Tag Archives: SFD13

Is there still a need for specialized administrators?

We have been hearing the clichés for quite some time now within the technology industry.  Sayings like “Breaking down silos” and “Jack of all trades, master of none” have been floating around IT offices for the past 5 years – and while I believe these sayings certainly hold some clout, I still have my doubts about this new “Generalized IT Admin”.  Honestly, with the changing landscape of technology and the fast-paced change we see introduced into our infrastructure, by all means we need to know (or know how to quickly learn) a lot – A LOT.  And while this generalized, broad skill set approach may be perfect for the day-to-day management of our environments, the fact is that when the sky clouds over and the storm rolls in, taking with it certain pieces of our data centers, we will want to have those storage specialists, that crazy smart network person, or the flip-flop-wearing virtualization dude who knows things inside and out available to troubleshoot and perform root cause analysis on issues in order to get our environments back up and running as quickly as possible!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc were all paid for by Gestalt IT, however I’m not required or obliged to return the favor in any way other than my presence 🙂 – Which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

Now, all this said, these problem situations don’t come along (I hope) that often, and coupled with the fact that we are seeing more and more “converged” support solutions, organizations can leverage their “one throat to choke” support call and get the specialists they need over the phone – all of this leads them one step closer to being able to employ these “Jack of all trades, master of none” personnel in their IT departments.  But perhaps the biggest stepping stone toward eliminating these specialized roles is the new rage being set forth by IT vendors: a little concept called “Policy Based Management”.

Enter NetApp SolidFire

Andy Banta from NetApp SolidFire spoke at Storage Field Day 13 about how they are utilizing policy based management to make it easier and more efficient for everyday administrators to consume and manage their storage environments.  I got the chance to sit as a delegate at SFD13 and watch his presentation, cleverly titled “The death of the IT Storage Admin” – and if you fancy you can see the complete recorded presentations here.

NetApp SolidFire is doing a lot of things right in terms of introducing efficiency into our environments and eliminating a lot of those difficult, mundane storage tasks that we used to see dedicated teams of specialized administrators perform.  With that said, let’s take a look at a few of those tasks and explore how NetApp SolidFire, coupled with VMware’s VVOL integration, is providing policy-based automation around them.

Storage Provisioning

In the olden days (and I mean like 5 years ago) the way we went about provisioning storage to our VMware environments could be, how do I say this, a little bit inefficient.  Traditionally, we as “Generalized VMware administrators” would determine that we needed more storage.  From there, we’d put a request out to the storage team to provision us a LUN.  Normally, this storage team would come back with all sorts of questions – things like “How much performance do you need?”, “How much capacity do you need?”, “What type of transport mechanism would you like this storage to be delivered over?”, “What type of availability are you looking for?”.  After answering (or sometimes lying) our way through these conversations, the storage team would FINALLY provision the LUN and zone it out to our hosts.  We would then create our datastore, present it to our ESXi hosts, and away we go filling it up – only to come back to the storage team with the same request the very next month.  It’s not a fun experience and is highly inefficient.

VMware’s VVOLs are a foundation for changing this, and NetApp SolidFire has complete integration points into them.  So, in true VVOLs fashion, we have our storage container, which consumes space from our SolidFire cluster on a per-VM/per-disk basis.  What this means is that as administrators we simply assign a policy to our VM, or to a VM disk, and our VMDK is provisioned automatically on the SolidFire cluster – no LUNs, no storage team conversations – all performed by our “generalized admin”.
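To make the contrast with the old LUN workflow concrete, here’s a toy sketch of that policy-driven flow in Python – the names (`provision_vmdk`, the policy dictionary) are my own invention for illustration, not VMware’s or SolidFire’s actual API:

```python
# Toy model: assigning a policy IS the provisioning workflow.
# No LUN request, no zoning - the backing VVOL is carved out of
# the cluster automatically when the policy is applied.
policies = {
    "gold":   {"min_iops": 1000, "max_iops": 5000},
    "silver": {"min_iops": 200,  "max_iops": 1000},
}
cluster = []  # stands in for the SolidFire cluster's VVOLs

def provision_vmdk(vm, disk, size_gb, policy_name):
    vvol = {"vm": vm, "disk": disk, "size_gb": size_gb,
            "policy": policies[policy_name]}
    cluster.append(vvol)  # placement is handled for us
    return vvol

vvol = provision_vmdk("web01", "scsi0:0", 40, "gold")
print(vvol["policy"]["min_iops"])  # 1000
```

The point of the sketch: the admin’s entire interaction is one policy assignment; everything the storage team used to do by hand happens behind that call.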

Storage Performance/Quality of Service

Now, as far as VVOL capacity provisioning goes, there isn’t a whole lot that is different between SolidFire and other IT storage vendors – but when we get into QoS I think we can all agree that SolidFire takes a step above the crowd.  SolidFire has always held that application performance and quality of service are the most important pieces of their storage – and with their VVOL implementation this is still true.

When setting up our policies within vSphere SPBM, NetApp SolidFire exposes a number of metrics and configuration options pertaining to QoS in our rule setup.  We can configure settings allowing us to set minimum, maximum, and burst IOPS on both our data VVOLs (the VMDKs) as well as our configuration VVOLs (VMX, etc.).  Once set up, we simply apply these policies to our VMs, and immediately we have assurance that certain VMs will always get the performance they need – or, on the flip side, that certain VMs will not be able to flood our storage, consuming IOPS and affecting their neighboring workloads.  This is a really cool feature IMO – while I see a lot of vendors allowing certain disk-type placement for our VVOLs (placing a VMDK on SSD, SAS, etc.), I’ve not seen many that go as deep as SolidFire, allowing us to guarantee and limit IOPS.
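As a rough illustration of how those three knobs interact (my own simplified model, not SolidFire’s actual scheduler), the rate a volume ends up with under contention might look like:

```python
def effective_iops(requested, min_iops, max_iops, burst_iops, has_burst_credit):
    """Shape a volume's IOPS per its QoS policy.

    Under contention the volume is guaranteed an allocation of at
    least min_iops, is normally capped at max_iops, and may exceed
    that cap up to burst_iops while credits banked during quiet
    periods last.
    """
    cap = burst_iops if has_burst_credit else max_iops
    return max(min_iops, min(requested, cap))

# Policy: guarantee 500, cap at 1000, burst to 2000
print(effective_iops(3000, 500, 1000, 2000, has_burst_credit=True))   # 2000
print(effective_iops(3000, 500, 1000, 2000, has_burst_credit=False))  # 1000
print(effective_iops(100, 500, 1000, 2000, has_burst_credit=False))   # 500
```

The noisy neighbor asking for 3000 IOPS gets clamped; the quiet VM still has its 500 IOPS reserved for it.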

This essentially removes the complexity of troubleshooting storage performance needs and constraints on our workloads – the setup is all completed within the familiar vSphere Web Client (complete with a NetApp SolidFire plug-in) and is applied the same way you have always edited a VM’s settings.

So – is the storage admin dead?

NetApp SolidFire has definitely taken a good chunk of the storage admin’s duties away and put them into the laps of our generalized admins!   Even though I haven’t mentioned it, even the scaling of a NetApp SolidFire cluster, as well as VASA provider failover, is automated in some way within their product.  So, yeah, I think they are on the right track – they have taken some very difficult and complex tasks and turned them into a simple policy.  Now, I wouldn’t jump to conclusions and say that the storage admin is 100% dead (there are still a lot of storage complexities and a lot of storage-related tasks to do within the datacenter), but NetApp SolidFire has, how do I put this – maybe just put them into a pretty good coma and has them lying in a hospital bed!   If you have made it this far I’d love to hear your take on things – leave a comment, hit me up on Twitter, whatever…  Take a look at the NetApp SolidFire videos from SFD13 and let me know – do you think the storage admin is dead?  Thanks for reading!

SNIA comes back for another Storage Field Day

SNIA, the Storage Networking Industry Association, is a non-profit organization made up of a number of member companies striving to create vendor-neutral architectures and standards throughout the storage industry.  Think Dell, VMware, HPE, Hitachi – all the likely names, all behind closed doors working for the greater good.  Ok – that’s their definition.  Mine?  Well, I compare it to Rocky III – you know, Rocky and Apollo, sworn enemies teaming up to make the world a better place by knocking out Mr. T.   So, I may be a little off with that, but not that far off!  Replace “Rocky and Apollo” with some “very big name storage companies” and swap out “knocking out Mr. T” with “releasing industry standards and specifications” and I think we are pretty close.


So, in all seriousness, SNIA has been around for 20 years and was formed initially to deal with interoperability issues surrounding networked storage.  Today we see them really focusing on architectures and standards, as well as a slew of education services, training, and certifications.  We can also see a ton of work being performed by SNIA around current storage trends such as flash, cloud, object storage, persistent memory, etc.  You name it, they have some work being done around it.  From their website, here is a handful of the work SNIA is currently investigating…

  • Cloud Data Management Interface (CDMI)
  • Linear Tape File System (LTFS)
  • IP Based Drive Management Specifications
  • NVM Programming Model
  • Self-contained Information Retention Format
  • Solid State Storage Performance Test Specifications
  • Swordfish Scalable Storage Management APIs

Wait!  They aren’t selling anything!

Honestly, I’ve never been to a Tech Field Day event where a non-profit organization has spoken – so I’m very excited to see what SNIA will choose to talk about!  As shown above they have a broad range of topics to choose from – and judging by past SNIA videos from TFD, they can go quite deep on these subjects.  It will be nice to hear a vendor-neutral approach to a TFD session.  I applaud SNIA for their efforts – it can’t be easy organizing and keeping all of its members in check – and it’s nice to see an effort from an organization, be it non-profit or not, looking out for the customers, the partners, the people that have to take all of these storage arrays and protocols and make them all work!  As always, follow along with all my SFD13 content here – keep your eye on the official event page here – and we will see you in June!

X-IO Technologies – A #SFD13 preview

In the technology sector we always joke that when a startup is 5 years old, that sometimes makes it legacy!  Meaning, 5 years is a long time in the eyes of a technologist – things change, tech changes, new hardware emerges.  All of this drives change!  Well, if 5 years makes a mature company, then I’m not sure what to call X-IO Technologies.  X-IO was founded over 20 years ago, in 1995 – taking them right off the scale in terms of aging for a tech company!   Honestly, I’ve heard the name before (or seen the logo), but I’ve never really looked at what it is X-IO does – so today let’s take a look at the current X-IO offerings and solutions and what they bring to the table – and, if you are interested, you can always learn more when they present at the upcoming Storage Field Day 13 event in Denver come June 14th – 16th.  But for now, the tech…


What does X-IO bring?

From their website it appears that X-IO has a couple of basic offerings, all hardware appliances, each serving a different point of interest in the storage market.  Let’s try and figure out what each of them does…

Axellio Edge Computing

This appears to be an edge computing system marketed mainly to companies needing performance for big data analytics, as well as those looking for a platform to crunch data from IoT sensors.   These converged storage and compute boxes are very dense in CPU, memory, and storage – supporting up to 88 CPU cores, 2TB of memory, and a maximum of 72, yes 72, 2.5” NVMe SSD drives.  Each appliance is basically broken down into two server modules for the compute and memory, as well as up to 6 FlashPacs (a FlashPac is essentially a module hosting 12 dual-ported NVMe slots).  As far as scale goes, I don’t see much mention of pooling appliances, so it appears these are standalone boxes, each serving a single purpose.

iglu Enterprise Storage Systems

Here it appears we have a storage array.  The iglu storage system can be built using all flash, a mixture of both flash and disk, or just spinning disk itself.    They appear to have multiple models supporting each disk configuration, with their all-flash version supporting over 600,000 IOPS.  Controllers on the iglu system are distributed, meaning whenever we add more capacity we are also adding more controllers, thus increasing both space and performance with the same upgrade.  As far as software goes, we see all the familiar features such as snapshots, CDP, replication, stretched clustering, and integration with VMware, SQL, Oracle, etc.  One nice aspect is that all iglu systems, no matter the model, have access to all of the software features – there is no licensing of individual aspects of the software.

I’m excited to see what X-IO has to say at SFD13 come this June.  There was some mention of a unique way of handling drive failures, as well as a lengthy 5-year warranty on everything, which may separate them from the storage vendor pack – but I’m hoping they have much more to talk about in regards to their storage offerings to give it that wow factor!  As always, you can find all my SFD13-related information here or follow the event page here to stay updated and catch the live stream!

Hear more from Exablox at Storage Field Day 13

As I continue along with some SFD13 previews I stumble upon Exablox.  Exablox is one of the few presenting companies at SFD13 that I know very little about, so I’m excited to hear what they have to offer and say come June 14th in Denver when the event kicks off.  Also, Exablox comes into SFD after being acquired by StorageCraft earlier this year, and given the partnerships between the two companies in the past I’m sure we will hear more about how this integration is going.  Headquartered in Mountain View, Exablox was the brainchild of Matthew Catino, Tad Hunt, and Frank Barrus.  Founded in 2010, the three set out to create a more scalable filer array – and in the end we are left with a product they call OneBlox.


OneBlox – A reimagined scale-out storage solution.

Honestly, when I first started diving into OneBlox I pretty much assumed it was just a simple filer box that you purchased chock-full of drives – but after some more digging there are honestly a lot of differences between OneBlox and a lot of the other scale-out NAS boxes I’m familiar with.

  • You can bring your own drives – yeah, you can put pretty much whatever drives you want in these things!  Different speeds and capacities?  No problem.
  • It runs off of object storage – underneath all that SMB/NFS presentation is a heaping helping of object-based storage, which is a big enabler for our next point.
  • There is no RAID – rather than utilizing parity to protect against failure, OneBlox utilizes its custom object-based file system to intelligently write 3 copies of every object, ensuring your data is written not only on different drives, but on different nodes within a ring as well.  Wait!  What’s a ring?
  • Their ring architecture – a ring is essentially a cluster of 1 or more OneBlox nodes.  All nodes within a single ring are pooled to form one single global file system which shrinks and grows as drives and nodes are added.  Items such as deduplication and compression are performed globally across this file system – meaning we have inline deduplication across multiple OneBlox nodes.
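That 3-copy placement can be sketched roughly as follows – a hypothetical stand-in for whatever Exablox actually does under the hood, just to show why node diversity can replace RAID parity:

```python
import hashlib

def place_object(object_id, ring_nodes, copies=3):
    """Choose distinct nodes in the ring to hold an object's copies.

    Hash the object id to a starting position on the ring, then walk
    forward so each copy lands on a different node.  Losing a whole
    drive - or even a whole node - still leaves intact copies elsewhere.
    """
    start = int(hashlib.sha256(object_id.encode()).hexdigest(), 16)
    n = len(ring_nodes)
    return [ring_nodes[(start + i) % n] for i in range(min(copies, n))]

nodes = ["oneblox-1", "oneblox-2", "oneblox-3", "oneblox-4"]
replicas = place_object("chunk-0042", nodes)
print(len(replicas), len(set(replicas)))  # 3 3
```

Because placement is a pure function of the object id and the ring membership, any node can locate any object without a central lookup – and when nodes join or leave, the ring simply rebalances.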

OneSystem – Cloud Based Management

As of late it seems like everyone is jumping on this “cloud based” management bandwagon – and rightly so.  We get the ability to manage our infrastructure from anywhere with a browser and an internet connection.   OneSystem is Exablox’s play in this field.  OneSystem essentially allows administrators to access all of their deployed OneBlox systems from a secure, available, cloud-based management server through a browser.  OneSystem provides real-time monitoring and health reporting, and supports multi-tenancy for those service providers and large enterprises that may need it.  Don’t trust the cloud just yet?  No problem – the whole OneSystem can be deployed on-premises as well if need be.

As I mentioned earlier, I don’t know a lot about Exablox, so I’m excited to see what they have to say at SFD13.  I read a little about some unique CDP and replication strategies they have for protecting data between different rings.  From the surface it looks like some cool tech, and I can’t wait to learn more about some of their biggest use cases and see how that underlying custom object-based file system works!  Hopefully we see more at SFD13!  For those that want to follow along, I’ll have the live stream along with any of my SFD13-related content on my page here, or stay up to date by keeping an eye on the official SFD13 event page.  And hey, follow along and ask questions using the hashtag #SFD13 – See you in June!

Primary Data set to make their 5th appearance at Storage Field Day

This June at SFD13, Primary Data will make their 5th appearance at a fully fledged Tech Field Day event, despite having GA’d their flagship product just 9 short months ago at VMworld 2016 (they have a few TFD Extra events under their belt as well).  Now, pardon my math, but for a company founded in 2013 that means they spent roughly 3 years in development – gathering loads of customer feedback and beta testing before releasing their data management platform to the wild, all the while giving updates via TFD – there’s an example of a company taking its time and doing something right!


Now, before we go into detail about Primary Data’s products and what they provide I wanted to take a second to talk a little bit about their founders and leadership team – because we all know great products and execution start from the top – and Primary Data has quite the top!  Let’s have a look below…

Lance Smith (CEO) – Despite holding many top level leadership positions over the years perhaps the most relevant would be COO of Fusion-io and following the acquisition moving into Senior Vice President and GM inside of SanDisk.

Rick White (Co-Founder and CMO) – This isn’t Rick’s first go-around, as he was also one of the original co-founders and CMO at Fusion-io.  Are you beginning to sense a little pattern here around Fusion-io? 🙂

David Flynn (Co-Founder and CTO) – Here comes that pattern again!  Prior to founding Primary Data David Flynn was a co-founder and CEO at Fusion-io.  Now reading his bio off of the site it also states that David holds over 100 patents across a wide range of technologies – not too shabby.

Steve Wozniak (Chief Scientist) – Yeah, that’s right, the Steve Wozniak, you know, the guy from Dancing with the Stars – oh, and he kind of helped found and shape a small little valley startup named Apple.

Now keep in mind these are just 4 people that I picked out from Primary Data’s company page!  And even though there is a ton of brainpower listed here, there are still a lot more people at Primary Data whose experience in the industry just blows my mind!

So what exactly does Primary Data bring to market?

Primary Data’s key storage solution focuses around their flagship product, DataSphere.  Now, DataSphere in itself isn’t a storage array – it’s best described as what they coin a “Metadata Engine for the Enterprise”.   So what does this really mean?  Well, hopefully I’m getting it right here, but to me DataSphere looks like somewhat of an abstraction tool – a way to de-couple all of your applications from the data that they use and store.  The first step is to take all of an organization’s storage and pool it together into a single logical namespace.  It’s this storage – be it direct-attached, SAN-based, or even cloud – which can in turn be presented back to your enterprise applications.   But it’s not necessarily this pooling which drives up the value of DataSphere – it’s the analytics and automatic data movement that really stand out for me.  DataSphere is able to map a set of application rules or objectives to automatically move data across different tiers of storage in order to ensure that certain SLAs are met – or, more so, that the right resources are being assigned to the right applications.  Meaning the proper storage resources are provisioned to applications, no matter where that application is running and no matter where that storage lives – all we need to do is specify that we want Application X to have Storage Requirements Y and let DataSphere tier and tune to its heart’s delight!  The metadata engine keeps track of where things are and what’s going on.
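A tiny sketch of that objective-to-tier matching idea – the tier names, latencies, and costs here are all made up for illustration, and DataSphere’s real objective model is surely far richer:

```python
# Hypothetical tiers an enterprise might pool under one namespace.
TIERS = [
    {"name": "nvme-flash", "max_latency_ms": 1,  "cost_per_gb": 10},
    {"name": "hybrid-san", "max_latency_ms": 5,  "cost_per_gb": 4},
    {"name": "cloud",      "max_latency_ms": 50, "cost_per_gb": 1},
]

def choose_tier(objective_latency_ms):
    """Pick the cheapest tier that still meets the app's latency objective."""
    ok = [t for t in TIERS if t["max_latency_ms"] <= objective_latency_ms]
    return min(ok, key=lambda t: t["cost_per_gb"])["name"] if ok else None

print(choose_tier(1))    # nvme-flash: only tier fast enough
print(choose_tier(5))    # hybrid-san: cheapest tier meeting 5 ms
print(choose_tier(60))   # cloud: everything qualifies, cheapest wins
```

Run continuously against live metadata, a matcher like this is what lets cold data drift down to cheap tiers while hot data climbs back up – without the application ever noticing.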

As you can imagine, this gives us a really good opportunity to scale in terms of storage – and really prevents the need to over-provision, which has become such a de facto standard in deployments today – in turn, organizations save dollars!    Now, from what I can tell, DataSphere doesn’t necessarily sit inside the data path for the application either – meaning it’s virtually invisible and not affecting performance in any way.  Instead it lies in somewhat of an out-of-band architecture – allowing applications to have direct paths to their data while DataSphere simply handles the metadata processing and analytics.

There are a ton of benefits that I can initially see with DataSphere – scaling, migration to cloud, and simple performance compliance are a few that come straight to mind.  I’m certain there is much, much more that can be done and I know I’m just scratching the surface, so with that I guess we will have to wait until Primary Data presents at Storage Field Day in June to learn more.  As always, follow along with the hashtag #SFD13 or check out my page here for all that is SFD13!

A field day of Storage lay ahead!

I’ve had the awesome opportunity to participate in a few Tech Field Day events throughout the last few years, travelling to Austin for VFD4, Boston for VFD5, and finally San Jose for TFD12 just last November!  To be honest, these days provide me with more knowledge than a lot of the week-long training courses I take!  They are jam-packed with technical deep dives from a variety of companies – companies who are eager to share and get the message out regarding their offerings!  And it’s not just the presenting companies jamming the knowledge into my brain, it’s the delegates as well!  I’ve honestly met some of the smartest people I know being a fellow delegate – not just on virtualization, but spread across the whole IT ecosystem.   Anyways, I’m super excited to have been invited back to another field day experience – this one, a storage event, taking place June 14-16 in Denver!


As it stands today Storage Field Day 13 is shaping up to have 6 presenting companies – all with their own different ties into the storage market.  We have everything from the tried and true companies such as DellEMC and Seagate all the way through to the startup technologies of Primary Data and the newly acquired Exablox.  In between sits the non-profit, vendor-neutral SNIA along with the hometown Colorado based X-IO Technologies.


Certainly a wide variety of companies to hear from – which should help to keep the event both exciting and interesting!

I mentioned that I’ve gained a lot of knowledge from other delegates in the past – and man o man, this one will be no different.  I’m actually super excited for this event, as I’ve really only met a few people on this list – so there will be a lot of new faces and friends to make here for me.  Honestly, it’s a little bit intimidating, as there are some real storage rockstars on this list!

I’ll do my best to try and get a few preview posts out for some of those presenting companies I know little about – mainly as my own homework so I don’t “go in cold” and get thrown into the deep end 🙂  That said, I can’t promise anything, as this event is quickly sneaking up on me and is only a couple weeks away now!   As always, I’ll try to get the stream set up on my event page here – as well as place any content I create surrounding SFD13.   Be sure to follow along on Twitter as well using the hashtag #SFD13, and keep an eye on the official event page!  See you in Denver!