
Veeam's willingness and responsiveness to change may be its biggest asset.

Let’s set the stage for this article, shall we?

At VMworld 2017 this year I had the chance to attend Veeam's Press Breakfast.  Now, I've been to press events before and very seldom have I ever written an article about them – I mean, they aren't a technical deep dive into any technology – sometimes there is embargoed pre-release information given out, but really the goal is to lay out a company's broader vision and scope for the press and analysts in attendance.

So why an article this time?

It's not because the eggs were delicious or the coffee was strong – and it's not due to any peeks into the future or pre-release information, as there was none handed out.  Honestly, the very first slide of co-CEO Peter McKay's presentation is what stood out for me.  That slide contained the exact same image as shown below.

The gist – it's not the strongest that survives, nor the smartest, but the one that can best adapt to the change around them.  This got the wheels in my head turning a bit – how could this quote, one commonly attributed to Charles Darwin, possibly apply to a tech company in today's world?  Which kind of led me to a final thought…

How has Veeam changed over the years?

I don’t recall the exact day I started using Veeam Backup & Replication.  It was sometime around the v3 days – but what I can remember is some of their messaging over the years – and how that has changed…

#1 for VMware

This one phrase resonated with me – at the time anyway!  Most organizations in the 2008-2010 era were heavily focused on deploying VMware environments and virtualizing their workloads.  We quickly realized that we needed some way to protect these VMs – something different from our traditional backup methods – which brings us to the next era of Veeam messaging.

Purpose Built for Virtualization

We saw an influx of Hyper-V environments being built as the industry was finally getting to a point where Hyper-V was “good enough”.  Therefore, we saw Veeam add support for the new Hypervisor and change their messaging to Purpose Built!  We needed something different from our legacy backup software to work within our virtualization environments – and Veeam was delivering on this with software designed around virtualization.

Availability for the Modern Datacenter

With the influx of cloud services occurring over the next little while we once again saw Veeam adapt – adding the ability to offload backup copies to the cloud as well as their own service providers via Cloud Connect.  We see the “purpose-built for virtualization” messaging dwindling away as Veeam began to release products which handled physical endpoint backups for both Windows and Linux.   Veeam was turning more and more into an availability company – moving away from the data protection business.

Availability for the Always-On Enterprise

And here we are today – Veeam has changed in a lot of ways, adding a slew of new products and functionality to adapt to the way data centers are operated.  Today Veeam is aiming to provide availability for our data no matter where it lives – in a hypervisor, in the cloud, via SaaS applications – Veeam is trying to be the be-all and end-all when it comes to providing availability of data!

So yeah, we have definitely seen growth within Veeam as a company over the last decade!  Everyone notices that – but hopefully this helps shed some light on how the organization itself has had to adapt to change!  In technology, nothing is static – we are always seeing bright and shiny new things – and as that quote attributed to Darwin goes, those who can adapt to change – or, in today's terms, respond to how the market is fluctuating in front of them – will be the ones who survive!  Call it agility, responsiveness, innovation, whatever – in the end it all comes down to how a company responds – and this is exactly what companies like Veeam need to do, not just to stay ahead in today's world, but to stay afloat!

So yeah, a post about a press breakfast!  Thanks for reading!

Ravello expanding HVX platform to bare metal.

I have had the chance to work with Ravello quite a lot in the last few years – participating in their initial vExpert access program testing out their beta of running nested ESXi – as well as getting a nifty deep dive into their HVX offering at a Tech Field Day a few years ago.  The tech is awesome – the people are great.

One thing I will admit though – when they were purchased by Oracle I was worried.

You see, sometimes when giant corporations such as Oracle swallow up a start-up like Ravello, we see some of the tech disappear – meaning the bigger company takes the cut of meat it wants off the bone and tosses the rest into the garbage, never to be seen again!

All that said, this certainly was not the case with Oracle and Ravello – the vExpert access that they so graciously offered up remained – and over the past couple of weeks I got a sneak peek into what exactly Oracle and Ravello have been up to – and to be honest, it's only gotten better!

Ravello on Oracle Cloud Infrastructure

Ravello's HVX was a key component in being able to forklift our VMware VMs as they are and place them into either the Amazon or Google cloud computing environments – or, if we fancied, we could simply turn on a flag inside Ravello and install ESXi directly in the cloud.  In the end, we had a sort of hypervisor inception if you will – a KVM fork running HVX, which in turn ran ESXi, and then our VMs.  As you can imagine there were some performance bottlenecks in doing so – not to say that Ravello was slow by any means – to be honest, the performance really surprised me when I started working with it.

In order to properly execute the underlying instructions in the traditional Ravello environments, they used something called binary translation.  This had to be used because certain hardware instruction sets were not exposed to Ravello – therefore Ravello would essentially recreate them in software.  The binary translation, coupled with the "nestedness" of the solution and some hard limits on the amount of CPU/memory per VM, tended to place Ravello into the Testing/Lab/Pre-Production folder of workload types…

Well, today, that all changes – we now have a couple of other options when it comes to deploying our VMs inside of Ravello.  We now have regions which are tied to Oracle Cloud Infrastructure (OCI).  OCI is basically Oracle's cloud; however, in the case of Ravello and HVX we have a couple of different flavours of platforms we can deploy on.  First, now that Ravello essentially owns the infrastructure HVX is running on, we can use something called hardware-assisted nested virtualization – in short, the hardware underneath supports exposing those instruction sets directly up to the HVX hypervisor, improving speed and performance.

But Ravello on OCI doesn't stop there.  Today we also have a new flag that we can apply to our workloads within Ravello – the preferPhysicalHost parameter can now be set to true on a per-VM basis, and when doing so we essentially run our workload directly on top of HVX, which in turn is running directly on bare metal.  Here we can execute our instruction set directly on bare-metal hardware, without the need for any sort of software or hardware translation at all.  As shown below, the bare-metal option certainly gives us the best performance of the three, as there is no software or hardware translation happening whatsoever.

So what type of performance are we talking about?

As I mentioned earlier, I really didn't have an issue with Ravello performance before the Oracle acquisition – maybe I'm just a patient guy, but I think I just didn't know what else could be done.  To get a bit of a baseline on performance I decided to execute the PassMark benchmark on the three different Ravello regions – and ended up with the following results:

PassMark results (screenshot per region): software-based nested virtualization (AWS/GCE), Oracle Cloud Infrastructure hardware-assisted nested virtualization, and Oracle Cloud Infrastructure bare metal.

As you can see we gain some great performance when moving to hardware-assisted virtualization; however, the jump from software-based instruction sets to bare metal nearly doubles the PassMark benchmark score – a pretty substantial increase!  Now, I know sometimes these benchmarks mean very little, but there has to be some truth to the increase in the numbers, right?  Either way, the difference in performance is most certainly noticeable – to be honest, I started the bare-metal test directly after the software-based test – and it nearly finished first.

So with these new performance increases, and support for larger VMs consuming more resources, I'm sure we will see Ravello on Oracle Cloud Infrastructure make a bigger push into the enterprise, looking to migrate some of those production workloads into the cloud.  I'm happy for Ravello and happy that Oracle has put forth the resources to make Ravello and HVX even better than they were.  It really is a solid service which provides some very unique and interesting opportunities as it applies to networking and cross-cloud configurations.  You can find the official Ravello announcement here – or if you're looking for more community blogs, give #RavelloOCI a search on Twitter!  Thanks for reading!

NetApp SolidFire HCI – Scale what you want, when you want

To say that NetApp's HCI announcement back in June created a bit of a kerfuffle would certainly be an understatement.  There has been a lot of back and forth on whether the compute/storage node system should be classified as an HCI solution or just a CI solution – and while I find the comments from El Reg interesting and sometimes very entertaining to read, I personally feel the argument is moot – who cares if it's HCI or CI, how it's packaged, or what the many definitions of HCI are?  To me, as a customer, all I care about in the end is whether the hardware/software combination can solve a problem I have, or deliver on a project I need to complete.  I got a close look at the solution when Gabriel Chapman from NetApp SolidFire presented at a recent Tech Field Day Extra event at VMworld – so while those "wordy" battles ensue, let's take a closer look at the technology behind this…

Just a disclaimer of sorts – every article, comment, rant or mention of Tech Field Day Extra at VMworld 2017 that you find here has been completed on my own merits. My flights and accommodations to/at the event were kindly paid for by Gestalt IT – and I received a conference pass to VMworld through VMUG – however I’m not required or obliged to return the favor in any way. 


So what’s it look like?

NetApp HCI from a physical standpoint looks much like the other HCI solutions we are used to – a 2U chassis with the ability to hold up to 4 nodes – pretty standard.  However, it's what's inside those nodes that differentiates the NetApp HCI solution from others.  Instead of each node providing both compute and storage, NetApp splits their nodes into individual functions – meaning a node can be either a compute node (a custom NetApp solution) or a storage node (an all-flash SolidFire solution).  The base package, or starter package if you will, does have a minimum footprint: 2 chassis containing 6 nodes (4 storage nodes and 2 compute nodes), with two empty slots available for expansion.  Once you meet this minimum requirement, customers can mix and match nodes as they please – want 10 storage nodes and 10 compute nodes?  No problem.  How about 4 storage nodes and 7 compute nodes?  Again, no issues, as the minimum requirements of 4 storage and 2 compute are met.

The nodes are essentially all we need as well.  Meaning we don’t have the traditional VM or controller workload that sits on top of each node to pool resources – all of the functionality is built into the nodes themselves, allowing the customer to utilize the resources provided for their workloads.


As you can see, this isn't the same design as the traditional HCI solutions we are used to – and this is what causes most of the discussion around NetApp HCI.  As I mentioned earlier, though, in the end a solution really needs to meet a customer's needs, and NetApp HCI does this by designing their solution around three main objectives: Guaranteed Performance, Flexibility & Scale, and Automated Infrastructure.

Guaranteed Performance

SolidFire itself provides some pretty nifty functionality when it comes to QoS – and it's this technology that drives the guaranteed performance benefits of the NetApp HCI solution.  By providing the ability to define and enforce minimum, maximum, and burst settings on a per-VM level, NetApp is able to dynamically allocate and manage storage performance for our workloads.  For example, we can define a certain workload to always have a baseline number of IOPS, allowing it to burst to a higher level for those one-off, month-end type situations.  In addition, we can set a maximum number of IOPS a neighboring workload can consume, ensuring that it does not steal the performance needed by other workloads.  What this does is allow us to take many different types of workloads and run them within the same infrastructure, with the confidence of knowing that we can guarantee the performance needed for each and every workload we run.
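To give a feel for what those settings look like in practice, here's a hedged sketch of a per-volume QoS change through the SolidFire Element JSON-RPC API – the cluster address, volume ID, and IOPS values below are made up for illustration, and the method/parameter names should be checked against the Element API reference for your release.

```powershell
# Build the JSON-RPC request body for a per-volume QoS change (illustrative values)
$body = @{
    method = 'ModifyVolume'
    params = @{
        volumeID = 42                 # hypothetical volume ID
        qos = @{
            minIOPS   = 1000          # floor this workload is always guaranteed
            maxIOPS   = 5000          # ceiling for sustained I/O (noisy-neighbor cap)
            burstIOPS = 8000          # short-lived burst credit for month-end spikes
        }
    }
} | ConvertTo-Json -Depth 4

$body

# Against a real cluster this would be POSTed to the Element API endpoint, e.g.:
# Invoke-RestMethod -Uri 'https://<cluster-mvip>/json-rpc/9.0' -Method Post -Body $body -Credential (Get-Credential)
```

The min/max/burst trio is what makes mixed workloads on one box workable – the floor protects the critical workload, the cap contains the noisy neighbor.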

Flexibility & Scale

Due to the way the NetApp HCI solution is designed, it provides much greater flexibility when it comes time to scale – meaning we can scale just our compute nodes if compute is where we are constrained.  Or, if we just need more storage, we can simply add more storage nodes into the pool – or both compute and storage can be added to existing or new chassis to meet the demands of our workloads.  NetApp certainly believes that one size doesn't necessarily fit all – and allows for that with their architectural designs when it comes to scale.  Also, NetApp believes that just because we have this shiny new piece of kit sitting in our datacenters, we aren't simply going to toss all of our older stuff.  That's why the NetApp HCI solution isn't limited to just the compute nodes that sit inside the chassis – in fact, any external solution that needs storage could, in essence, pull from the HCI storage nodes – you have an investment in some SolidFire storage there – might as well leverage it!

Automated Infrastructure

Most modern solutions we deploy within our datacenters today involve some sort of published API consumption and rely heavily on automation – and NetApp HCI is no different.  The initial deployment of the solution hinges on the execution of something called the NetApp Deployment Engine (NDE).  This essentially takes some credentials and IP information from the user and automates the installation of the Element OS on our storage nodes and VMware ESXi on our compute nodes, as well as provisioning vCenter complete with the HCI plugin and configuring our networks, datastores, and vCenter inventory items accordingly – all within roughly 45 minutes from start.  As far as building your own automation around the solution, NetApp offers a very comprehensive API.  Interesting fact – the first three iterations of the SolidFire release came without a GUI: nothing but a bunch of API calls.  So yeah, if you need to automate anything, you should be able to find some sort of hook into the NetApp HCI solution to do so.

In the end it was great to hear from Gabe about how NetApp is entering the HCI market.  The solution is certainly unique, but it's also very flexible when it comes to scalability – any time a company provides a solution which offers choice when it comes to scale, it's a good move in my books.  The war over whether this truly is an HCI solution, though, in my opinion is really null and void – certainly the analysts may care, the people who decide whether or not a company or solution is included on a magic quadrant may care – but me, I don't!  What I care about is finding solutions to my organization's technical problems – and NetApp HCI could certainly fit in certain areas – that said, traditional, first-gen HCI as they say can also solve certain needs.  So, call it HCI or just call it CI, but in the end NetApp has a nifty solution that independently scales and solves certain customer needs – throw some SolidFire QoS and automation sauce on top of that and it certainly gets a check in my books!  If you want to learn more about NetApp's HCI solution, head on over to the TFD event site and watch the videos!

Consuming the Veeam REST API in PowerShell – Part 1 – Starting a job

Since the version 7 release of Veeam Backup & Replication, all of the typical Enterprise Manager functionality has been exposed via an XML REST API.  Being a pretty heavy user of vRealize Orchestrator, this has proven extremely useful to me when looking to automate certain parts of my infrastructure.  That said, there are times when vRO is simply out of reach – or when the person I'm creating the automation for is simply more familiar with PowerShell.  Now, I understand that Veeam Backup & Replication does come with PowerShell support, and what I'm about to walk through may be somewhat redundant as they have their own cmdlets built around certain tasks – but this crazy IT world we live in is changing, and REST seems to be at the helm of that.  We are seeing more and more vendors first create a REST API and then consume it themselves in order to provide customers with a GUI front end.

So, in the spirit of learning how to work with the Veeam REST API, I decided I'd take the time to document how to perform some of the sample functions from their API reference using nothing but PowerShell.  This first post will deal solely with how to start an existing Veeam Backup & Replication job.  Keep in mind the sheer nature of REST is that, although the bodies and headers may change, the process of consuming it is relatively the same no matter what the application – so there is some valid learning to be had regardless of the end product.

PowerShell and interacting with REST.

Before jumping right into Veeam specifics, we should first discuss a few things around the PowerShell cmdlet we will need to use – as well as specifics around the Veeam Enterprise Manager REST API itself.  REST APIs are nothing more than simple HTTP requests sent to an endpoint – meaning they are consumed by simply sending a request (a GET, PUT, POST, etc. – whatever the API supports) to a URI.  From there, the API looks at what was passed and returns what it normally would with any HTTP request: a header, a status code, and a body.  It's this response that we need to parse in order to discover any details or information pertaining to our request – it lets us know whether or not the operation was successful, and passes back any valid data as it relates to the request.

Now, in Veeam's case they use an XML-based API for Enterprise Manager.  This means we can expect to see the response body in XML format – and, if we ever need to create a body to pass with the request, we need to form that body as XML before we send it!  All of this sounds kind of difficult – but in the end it really isn't, and you will see that as we create our first script!  Really, there are two key PowerShell specifics we are using…

  • Invoke-WebRequest – the cmdlet we use to send the API call, passing a URI, a method, and sometimes a header
  • [xml] – a simple way to take our response and cast it as XML in order to more easily parse and retrieve the desired information from it
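Since that [xml] cast does most of the heavy lifting in the scripts that follow, here's a tiny self-contained sketch of how it behaves – note the XML string is hand-written to mimic the shape of the Enterprise Manager logon response, not a captured payload.

```powershell
# Hand-written XML mimicking the shape of the Enterprise Manager logon response
$content = @"
<LogonSession>
  <Links>
    <Link Type="JobReferenceList" Href="http://localhost:9399/api/jobs" />
    <Link Type="BackupServerReferenceList" Href="http://localhost:9399/api/backupServers" />
  </Links>
</LogonSession>
"@

# Casting the string to [xml] turns it into an XmlDocument we can walk with dot notation
$xml = [xml]$content

# Pull the Href for a specific link type, exactly like the one-liners later in this post
$uri = ($xml.LogonSession.Links.Link | Where-Object { $_.Type -eq 'JobReferenceList' }).Href
$uri
```

That dot-notation-plus-Where-Object pattern is the whole trick – every "find the next URI" step below is just this, applied to a different part of the response.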

So with that said, let’s get scripting…

First Step – Get a SessionId

The first step in any API consumption is usually authentication – and aside from the scopes and methods themselves, this is normally where we see the most discrepancies between vendors.  With Veeam we simply send a POST request to the sessionMngr resource type and retrieve a sessionId.  It's this sessionId which will then need to be included within the header of all subsequent requests to the API – this is how we are identified and authenticated.  Now, you could send a GET request to the root of the API scope and parse through all of the returned content to find a specific version's URI if you wanted – but I happen to know that we can simply use ?v=latest within Veeam to always use the latest and greatest version.  So let's go ahead and authenticate against the API and retrieve our sessionId with the following code:

$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
$sessionId = $response.Headers["X-RestSvcSessionId"]


Looking at the code above, we are basically doing a couple of things – first, we issue our POST request to http://localhost:9399/api/sessionMngr/?v=latest, having the system prompt us for the credentials that will perform the actual authentication.  Then we parse the headers returned in the response in order to grab our sessionId.  If all goes well, you should be left with a string in a format similar to the one shown below stored in our sessionId variable – and now we are authenticated and ready to start requesting…

Now let’s start that job!

So the first example in the REST API Reference is starting a specific job – to do this we first need to get the URI for the jobs resource.  Now, we could simply look this up in the reference guide as it has all the information (***hint*** it's http://localhost:9399/api/jobs) – but where's the fun in that?  The response we just received from logging in has all of the information we need to grab the URI programmatically – and, should things ever change, we won't have to rewrite our code if we grab it from the response.  So, to get the proper URI we can use the following one-liner to parse our content as XML and find the correct child node…

$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href

Now that we have the proper URI, we can make a GET request to it to return a list of jobs within Enterprise Manager.  But remember, we have to pass that sessionId through the request header as well – so in order to do this we issue the following command…

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, our $response.Content will contain a lot of information, including all of our job names and the metadata associated with them.  So, in order to find the proper URI for my job (Backup Scoreboard), I can use the following command to once again retrieve the URI for our next call.

$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq 'Backup Scoreboard'}).Href

Once we have that, we again send a GET request to the new URI…

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, we get a lot of information when looking at our $response.Content – but let me format it a bit for you below so we can see what we have…


As you can see, we have a few different Hrefs available to grab this time – each relating to a different action that can be taken on our job.  In our case we are simply looking to start the job, so let's grab that URI with the following command…

$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href

And finally, to kick the job off we send a POST request – this time using the URI we just grabbed…

$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}

Now, if everything has gone as intended, we should be able to pop over to our VBR console and see our job running.  Now wasn't that way easier than right-clicking and selecting Start?  One thing I should note is that we can parse this body as well and grab the taskId for the job we just started – from there we can query the tasks resource to figure out its status, result, etc.  For those that learn better by simply seeing the complete script, I've included it below (and in fairness, running this script is faster than right-clicking and selecting 'Start').  In our next go at PowerShell and the Veeam API we will take a look at how we can initiate a restore – so keep watching for that…  Thanks for reading!

$backupjobname = "Backup Scoreboard"
#Log in to server
$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
#Get Session Id
$sessionId = $response.Headers["X-RestSvcSessionId"]
# Get Job Reference link
$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href
# List jobs
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
# get specific job from list
$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq $backupjobname }).Href
#get job actions
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
#get start action
$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href
#Start job
$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}
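As mentioned above, the Start POST hands us back a task we can track.  The sketch below shows how that parsing might look – the sample XML is hand-written to mirror the general Task entity shape, so treat the element names (Task, State, Result) and the tasks URI as assumptions to verify against the API reference for your version.

```powershell
# Hand-written sample mirroring the assumed shape of a Task entity (not a captured payload)
$taskXml = [xml]@"
<Task>
  <TaskId>task-1</TaskId>
  <State>Finished</State>
  <Result Success="true" />
</Task>
"@

# Same [xml] dot-notation trick as before to read the task state
$state = $taskXml.Task.State
$state

# Against a live server, polling might look something like this
# ($taskId and $sessionId as established in the script above):
# do {
#     Start-Sleep -Seconds 5
#     $response = Invoke-WebRequest -Uri "http://localhost:9399/api/tasks/$taskId" `
#         -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
#     $state = ([xml]$response.Content).Task.State
# } while ($state -eq 'Running')
```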

The concept of “Scale In” for high volume data

When we think of scaling the infrastructure within our data centers, a couple of different models come to mind – we can scale out, which is essentially adding more servers or nodes into our application's cluster or infrastructure – or we can scale up, which involves adding more resources to our already existing servers to support the applications running within them.  But what's with this "Scale In" business?

First, let’s look at the pros and cons of “Scale Out” and “Scale Up”

Scaling up tends to be a little easier on the licensing side of things and helps with the cooling/power bills for our data centers; however, it imposes a greater risk from hardware failure and tends to get costly once we start to hit the maximums a server can hold.

Scaling out, while providing nice upgrade paths and generally giving us an "unlimited" amount of availability, leaves a much bigger footprint within our data centers, resulting in higher cooling/power bills and possibly more dollar signs when it comes to license and maintenance renewals.

For our average everyday workloads, whether we scale out or up may not have that great an effect on our bottom line – but what about this whole new trend of machine learning and big data analytics?  These types of processes require an extremely high number of resources to process data.  Scaling out and up in these situations certainly has a huge effect on our data center bills – and often still doesn't provide us with the data locality and performance we need, as it requires us to rely too much on our networks, which in turn eventually need to scale up to support more data flow – so how do we overcome this?

X-IO Technologies may have the answer!

At SFD13 in Denver this June, X-IO Technologies invited us into their offices to see what they have been up to for the past little while.  In fact, it had been three years since X-IO last participated in a Tech Field Day event, and a lot has changed since then!  At SFD5, X-IO talked about their flagship ISE technology – a general-purpose storage array targeted at the mid-market enterprise with the normal features such as performance, availability, etc.  Fast forward to today and their story has completely changed – while still supporting their older product lines, ISE and iglu, X-IO has shifted R&D resources and pivoted into the big-data market with their Axellio Edge solution – a converged storage and server appliance leveraging a lot of compute power and a ton of NVMe storage on the back end – their own "Scale In" solution!

Hello Axellio

Before delving into exactly how this Axellio scale-in solution performs, let's first take a look at what everyone is interested in – the hardware specs!  Axellio is a converged appliance – meaning it takes compute, memory, and storage and combines them into one 2U rack-mounted appliance.

As far as compute and memory go, Axellio contains 2 nodes, each supporting up to 44 CPU cores and 1TB of RAM – so yeah, do the math – we basically have 88 cores and 2TB of memory to work with here.

That said, the biggest benefit of the Axellio in my opinion is the storage back end – Axellio's backplane supports up to 72 dual-ported NVMe SSD drives.  Currently that brings Axellio's maximum capacity to 460TB with 6.4TB NVMe drives – in the future, with larger drives, we are looking at a whopping 1 petabyte of storage – all at bus speed with NVMe performance – think greater than 12 million IOPS with 35 microseconds of latency!
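The quoted maximum falls out of simple multiplication (assuming 6.4TB drives – the drive size, not 6.4GB, is what makes the math work):

```powershell
# 72 dual-ported NVMe drives at 6.4TB each, rounded to one decimal place
$capacityTB = [math]::Round(72 * 6.4, 1)
$capacityTB   # 460.8 TB, i.e. the ~460TB maximum quoted above
```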

So….back to scaling in.

To help explain their scale-in concept, let's take a look at an actual logical diagram of the Axellio platform.  As we can see, Axellio doesn't function the same as the traditional converged and hyperconverged appliances we see today – the storage, in essence, isn't distributed – meaning the server nodes do not have their own local storage that they pool together and present globally to a cluster, nor are they addressable by any sort of global namespace.  Although the FabricXpress functionality does allow for inter-node communication to support things like memory mapping back and forth between the nodes, they are essentially two distinct server nodes.


What we have here is basically two separate and distinct compute nodes connecting to the same FabricXpress backplane and both accessing the shared NVMe storage!  As you can start to imagine, this is where the "scale in" concept comes into play – we have the scale-out advantage of having two nodes, combined with the scale-up benefits of a lot of cores and memory – all backed by the blazing speeds of NVMe on the back end!

But the magic is in the software right?

Of course – software rules the world today – but Axellio isn't providing you with any!  X-IO's play with Axellio isn't to sell you something to run your VMs on, or something into which you simply pipe your data for some X-IO-built analytics engine – this isn't a general-purpose server!  Axellio is basically an OEM box – a box targeted at companies and enterprises that need a massive amount of compute and storage performance in order to solve specific problems.  Think things like streaming analytics or in-memory big data applications.  In the end, it's the customer that is left with the choice of how to leverage the Axellio platform – meaning they put the OS on the compute nodes, they determine if they want RAID or any other form of availability for the storage, they decide whether to use each server node independently or to set up some form of HA between the two – the customer is in full control!

One interesting use case they had was an analytics engine where one server node takes on the role of writing the streaming data to the drives, while the other server node provides the compute and access to any real-time analytics that may need to be run!  Now – while this use case can be handled many different ways – Axellio does it at very high speed and very low latency – oh yeah, and within 2U of rack space!

So in the end, I think X-IO Technologies is on to something with Axellio – and honestly, it appears to me like they are still "learning" how they plan to bring this to market!  Currently, they are focusing on providing a hardware platform to a somewhat niche group of players, looking to solve very specific use cases and problems – a big change from directing all their efforts into the storage array market, which is flooded with general-purpose vendors.  And rightly so – they need to explore this area and gather more data and use cases before going down any other roads with Axellio.  Where those roads may lead is yet to be determined, but I can see one of two things happening with Axellio – it moves towards a reference architecture model, meaning we get in-depth documentation on how to do things like Hadoop or large-scale Splunk deployments with Axellio – or maybe, just maybe, X-IO Technologies has something in the works in terms of their own software that they can layer on top of Axellio!

If you want to learn more about X-IO Technologies and Axellio certainly check out their website here.  You can also find their SFD13 recorded presentations here – If you want to get really nerdy I’d suggest watching Richard Lary talk about dedup and math!  And of course don’t forget to check out the posts from fellow delegates Brandon Graves, Dan Frith, and Ray Lucchesi as well!  Thanks for reading!

Is there still a need for specialized administrators?

We have been hearing the clichés for quite some time now within the technology industry.  Sayings like "breaking down silos" and "jack of all trades, master of none" have been floating around IT offices for the past 5 years – and while I believe these sayings certainly hold some clout, I still have my doubts about this new "Generalized IT Admin".  Honestly, with the changing landscape of technology and the fast-paced change we see introduced into our infrastructure, we need to know (or know how to quickly learn) a lot – A LOT.  And while this generalized, broad skill set approach may be perfect for the day-to-day management of our environments, the fact is, when the sky clouds over and the storm rolls in, taking with it certain pieces of our data centers, we will want to have those storage specialists, that crazy smart network person, or the flip-flop-wearing virtualization dude who knows things inside and out available to troubleshoot and perform root cause analysis in order to get our environments back up and running as quickly as possible!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. were all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

Now, all this said, these problem situations don't come up (I hope) that often, and coupled with the fact that we are seeing more and more "converged" support solutions, organizations can leverage their "one throat to choke" support call and get the specialists they need over the phone – this all leads them one step closer to being able to employ these "jack of all trades, master of none" personnel in their IT departments.  But perhaps the biggest stepping stone in eliminating these specialized roles is the new rage being set forth by IT vendors implementing a little concept called "Policy Based Management".

Enter NetApp SolidFire

Andy Banta from NetApp SolidFire spoke at Storage Field Day 13 about how they are utilizing policy-based management to make it easier and more efficient for everyday administrators to consume and manage their storage environments.  I got the chance to sit as a delegate at SFD13 and watch his presentation, cleverly titled "The death of the IT Storage Admin" – and if you fancy, you can see the complete recorded presentations here.

NetApp SolidFire is doing a lot of things right in terms of taking steps to introduce efficiency into our environments and eliminate a lot of those difficult, mundane storage tasks that we used to see dedicated teams of specialized administrators perform.  With that said, let's take a look at a few of those tasks and explore how NetApp SolidFire, coupled with VMware's VVOL integration, is providing policy-based automation around them.

Storage Provisioning

In the olden days (and I mean like 5 years ago) the way we went about provisioning storage to our VMware environments could be, how do I say this, a little bit inefficient.  Traditionally, we as "generalized VMware administrators" would determine that we needed more storage.  From there, we'd put a request out to the storage team to provision us a LUN.  Normally, the storage team would come back with all sorts of questions – things like "How much performance do you need?", "How much capacity do you need?", "What transport mechanism would you like this storage delivered over?", "What type of availability are you looking for?".  After answering (or sometimes lying) our way through these conversations the storage team would FINALLY provision the LUN and zone it out to our hosts.  We then create our datastore, present it to our ESXi hosts and away we go filling it up – only to come back to the storage team with the same request the very next month.  It's not a fun experience and is highly inefficient.

VMware's VVOLs are a foundation to help change this, and NetApp SolidFire has complete integration points into them.  So in true VVOLs fashion we have our storage container, which consumes space from our SolidFire cluster on a per-VM/per-disk basis.  What this means is that as administrators we simply assign a policy to our VM, or our VM's disk, and after that our vmdk is provisioned automatically on the SolidFire cluster – no LUNs, no storage team conversations – all performed by our "generalized admin".
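
As a rough sketch of what that shift means – names and policy fields here are mine, not the actual SPBM/VASA interfaces – provisioning collapses into attaching a policy to a disk:

```python
# Hypothetical policy catalog and provisioning call -- illustrative of the
# workflow only, not the actual SPBM/VASA APIs.
POLICIES = {
    "gold":   {"min_iops": 1000, "max_iops": 5000},
    "bronze": {"min_iops": 100,  "max_iops": 1000},
}

def provision_vmdk(vm_name, size_gb, policy_name):
    """Carve a per-VM virtual volume straight from the cluster -- no LUN
    request, no zoning conversation; the policy carries the requirements."""
    policy = POLICIES[policy_name]
    return {"vm": vm_name, "size_gb": size_gb, **policy}

disk = provision_vmdk("web01", 40, "gold")
print(disk)  # {'vm': 'web01', 'size_gb': 40, 'min_iops': 1000, 'max_iops': 5000}
```

The whole back-and-forth with the storage team is replaced by that one policy lookup – which is really the promise of policy-based management in a nutshell.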

Storage Performance/Quality of Service

Now, as far as VVOL capacity provisioning goes there isn't a whole lot that is different between SolidFire and other IT storage vendors – but when we get into QoS I think we can all agree that SolidFire takes a step above the crowd.  SolidFire has always focused on the point that application performance and quality of service is the most important piece of their storage – and with their VVOL implementation this is still true.

When setting up our policies within vSphere SPBM, NetApp SolidFire exposes a number of metrics and configuration options as they pertain to QoS in our rule setup.  We can configure settings allowing us to set minimum, maximum, and burst IOPS on both our data VVOLs (the vmdks) as well as our configuration VVOLs (vmx, etc.).  Once set up, we simply apply these policies to our VMs and immediately we have assurance that certain VMs will always get the performance they need – or, on the flip side, that certain VMs will not be able to flood our storage, consuming IOPS and affecting their neighboring workloads.  This is a really cool feature IMO – while I see a lot of vendors allowing us to do certain disk-type placement for our VVOLs (placing a vmdk on SSD, SAS, etc.) I've not seen many that go as deep as SolidFire, allowing us to guarantee and limit IOPS.
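
To picture how min/max/burst interact, here's a simplified sketch of the semantics – not SolidFire's actual scheduler (for one thing, the real implementation accrues burst credits while a volume runs below its max; here the credit balance is simply passed in):

```python
def allowed_iops(requested, min_iops, max_iops, burst_iops, burst_credits):
    """Grant IOPS under a min/max/burst QoS policy (simplified sketch).

    min_iops is the floor the array guarantees under contention and isn't
    modeled here. Bursting above max spends credits; the volume can never
    exceed burst_iops regardless of credits.
    """
    if requested <= max_iops:
        return max(requested, 0), burst_credits
    # Burst above max only as far as credits allow, never past burst_iops.
    granted = min(requested, burst_iops, max_iops + burst_credits)
    return granted, burst_credits - (granted - max_iops)

# A VM asking for 12,000 IOPS against an 8,000 max bursts on its credits:
print(allowed_iops(12000, min_iops=500, max_iops=8000,
                   burst_iops=15000, burst_credits=5000))  # (12000, 1000)
```

A neighbor-flooding VM simply runs out of credits and gets clamped back to its max – which is exactly the noisy-neighbor protection described above.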

This essentially removes the complexity of troubleshooting storage performance needs and constraints on our workloads – the setup is all completed within the familiar vSphere Web Client (complete with a NetApp SolidFire plug-in) and is applied the same way you have always edited a VM's settings.

So – is the storage admin dead?

NetApp SolidFire has definitely taken a good chunk of the storage admin's duties away and put them into the laps of our generalized admins!  Even though I haven't mentioned it, even the scaling of the NetApp SolidFire cluster, as well as VASA provider failover, is all automated in some way within their product.  So, yeah, I think they are on the right track – they have taken some very difficult and complex tasks and turned them into a simple policy.  Now, I wouldn't jump to conclusions and say that the storage admin is 100% dead (there are still a lot of storage complexities and a lot of storage-related tasks to do within the datacenter) but NetApp SolidFire has, how do I put this – maybe just put them into a pretty good coma and has them lying in a hospital bed!  If you have made it this far I'd love to hear your take on things – leave a comment, hit me up on Twitter, whatever…  Take a look at the NetApp SolidFire videos from SFD13 and let me know – do you think the storage admin is dead?  Thanks for reading!

Top vBlog voting is underway

The sheer number of blogs listed over on Eric Siebert's vLaunchpad is simply amazing!  I don't know how many are listed there – but there is certainly a lot of scrolling that needs to happen in order to get to the bottom – it's awesome to see just how much information is being shared within the virtualization community!  Props to everyone for that!  And props to Eric – I sometimes struggle with setting up links within my blog posts, let alone tracking the RSS and Twitter profiles of all of those blogs/bloggers.  Every year Eric keeps this list up to date – ensuring blogs are current, active, and categorized – all with the intention of hosting the annual Top vBlog voting!

Well – that time is now.  It's time to go and show your support for the blogs, news sites, and podcasts out there that help guide you through your daily job or spark your interest in new technologies.

This year we see some changes for the better to the contest – firstly, blogs with 10 posts or fewer during 2016 will not be listed on the ballot – ensuring that we are only voting for those who put forth the time and effort of releasing content.  Secondly, public voting will not be the sole measurement in the ranking of the blogs.  Sure, your opinion will still hold the majority of the ranking at 80%, however the remaining 20% will be split between the number of posts published on the blog and the Google PageSpeed score of the blog – forcing bloggers to sharpen up their web hosting skills and try to optimize their sites.

So with that – if you have found a blog post particularly useful throughout the year or enjoyed reading a particular community member's blog – go and vote and support them!  In all honesty, it's not as if there is a massive prize or anything at the end, but I can say, as a blogger, I enjoy looking at the results and seeing where people have ranked, as well as where I rank among them!  For me, it's a humbling experience to even be listed!  So big thanks to Eric for tallying up all these votes and handling all of the category submissions and everything!  I know that it's not for the faint of heart!  And also, huge thanks to Turbonomic for supporting the Top vBlog this year!  If you are looking to right-size your environment, migrate to cloud, or simply get the most bang for your buck no matter where your workloads live I would recommend checking out what Turbonomic has to offer!  And, when you are done, go vote & Make Virtualization Great Again!

The StarWind Cloud VTL for AWS and Veeam

When companies are approaching a data protection strategy, something dubbed the "3-2-1 rule" often comes up in conversation.  In its essence, the 3-2-1 rule is designed as a process to ensure that you always have data availability should you need it: 3 copies of your data, on 2 different media types/sets, with 1 located offsite.  Now, when taking this rule and applying it to our data protection design, the subject of tape usually comes up, as it facilitates that second type of media we need to satisfy the "2" portion.  Tape has had a role in data protection for a long time, but the mundane tasks of removing one tape and inserting another just don't fit well inside our modern datacenters.  When it's time to restore we are then left with the frustration of finding the proper tapes and then the slow performance of moving data off of that tape back into production.  That's why companies like StarWind initially built what is called the Virtual Tape Library (VTL).  The StarWind VTL mimics a physical tape library, however instead of requiring the manual intervention of removing and loading tapes it simply writes the data to disk.
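
The rule itself is simple enough to express as a quick check – an illustrative helper of my own, not a Veeam or StarWind tool:

```python
def satisfies_3_2_1(copies):
    """Check (media_type, location) backup copies against the 3-2-1 rule:
    at least 3 copies, on 2 different media types, 1 of them offsite.
    Illustrative helper -- not a Veeam or StarWind tool."""
    media = {m for m, _ in copies}
    offsite = [loc for _, loc in copies if loc == "offsite"]
    return len(copies) >= 3 and len(media) >= 2 and len(offsite) >= 1

copies = [("disk", "onsite"), ("virtual-tape", "onsite"), ("virtual-tape", "offsite")]
print(satisfies_3_2_1(copies))                    # True
print(satisfies_3_2_1([("disk", "onsite")] * 3))  # False -- one media type, nothing offsite
```

Virtual tape synced to the cloud, as described below, ticks both the second-media and the offsite boxes at once.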

The StarWind VTL is nothing new – in fact it's been around since 2009.  But just this past month at VeeamON, StarWind announced yet another version of their VTL – only this time, instead of just writing the data to local disk, they now have the option to additionally sync those virtual tapes to the cloud.  The software, called StarWind Cloud VTL for AWS and Veeam, couldn't come at a more opportune time, as only a week before the announcement "WannaCry" was worming its way through Europe, encrypting both production and backup data – leaving those companies without some sort of offsite, air-gapped backup without a whole lot of options.


So how does it work?

The StarWind Cloud VTL for AWS and Veeam is 100% software based – therefore no extra hardware or appliances need to be racked and stacked in your datacenter at all.  In fact, for convenience and cost reasons StarWind Cloud VTL can even be installed directly alongside your Veeam Backup & Replication backup server.  If you have ever installed any other StarWind products then the Cloud VTL setup will look very similar, utilizing a very easy to use wizard-type installation.

Once installed, the configuration (as shown below) is really just adding our virtual tape device (drive) and however many virtual tapes we want.  As we can see, StarWind actually mimics the HPE MSL8096 Tape Library – therefore we may need to pull down the appropriate device drivers in order to support it.  Once installed we are essentially left with an iSCSI target that points to our VTL, which in turn maps to local disk.


So by now you might be thinking "Hey, these tapes are mapped to disk, not cloud" and you are absolutely correct in that thought.  StarWind Cloud VTL implements what they call a disk-to-disk-to-cloud process – meaning data is first copied to disk (StarWind) from disk (production) and then further replicated to cloud (Amazon S3/Glacier).  This scenario allows the actual Veeam tape job to complete much faster as it's simply streaming to local disk – after which, the data is replicated to Amazon.  To set this up we simply need to click on the 'Cloud Replication' option (shown right) within the StarWind management console and provide our access and region information for our S3 bucket.
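
A rough sketch of that replication decision might look like the following – field names are illustrative, not StarWind's actual schema; the behavior modeled is simply "replicate a tape once it has been ejected for the configured wait period":

```python
from datetime import datetime, timedelta

def tapes_to_replicate(tapes, wait_hours, now):
    """Pick ejected virtual tapes that are due for upload to S3.

    Disk-to-disk-to-cloud sketch: the tape job streams to local disk first,
    and a tape is replicated to the cloud only once it has sat ejected for
    the configured wait period. Field names are illustrative only.
    """
    wait = timedelta(hours=wait_hours)
    return [t["name"] for t in tapes
            if t["ejected_at"] is not None and now - t["ejected_at"] >= wait]

now = datetime(2017, 6, 1, 12, 0)
tapes = [
    {"name": "TAPE001", "ejected_at": datetime(2017, 6, 1, 6, 0)},   # 6h ago
    {"name": "TAPE002", "ejected_at": datetime(2017, 6, 1, 11, 0)},  # 1h ago
    {"name": "TAPE003", "ejected_at": None},                         # still loaded
]
print(tapes_to_replicate(tapes, wait_hours=4, now=now))  # ['TAPE001']
```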

Above I hinted at yet another feature of the StarWind Cloud VTL with the mention of Glacier.  As shown below we can see a few options as they pertain to our retention – the most interesting being the ability to migrate our tape data out of S3 and into the cheaper, more archive-suitable Glacier service after a certain period of time.  This tiering feature allows us to keep costs down by essentially staging and de-staging our backup data depending on age – older data moves to a lower-tier, lower-performance storage class while our most recent restore points stay on a more readily accessible, higher-performance cloud storage service.
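
StarWind drives this tiering itself, but for comparison, the equivalent native S3 lifecycle rule would look something like this – the bucket prefix and day counts are hypothetical, while the dict shape matches what boto3's `put_bucket_lifecycle_configuration` accepts:

```python
# Hypothetical prefix and day counts; this is the standard S3 lifecycle
# structure one could apply with boto3's put_bucket_lifecycle_configuration
# to get the same S3-to-Glacier aging natively.
lifecycle = {
    "Rules": [{
        "ID": "tier-aged-virtual-tapes",
        "Filter": {"Prefix": "vtl/"},
        "Status": "Enabled",
        # After 30 days, migrate tape objects down to the Glacier tier.
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]
}
print(lifecycle["Rules"][0]["Transitions"][0])  # {'Days': 30, 'StorageClass': 'GLACIER'}
```

The value of StarWind doing this for you is that the tiering schedule lives next to your tape retention settings rather than in a separate AWS console.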


We can also see that we have options surrounding when to purge the local on-site tape data, as well as how long to wait after a virtual tape has been ejected locally before we start the replication to S3.

That's really it as far as the StarWind setup is concerned.  The only thing left to do now is set up the VTL as a tape server within VBR.  Before we can do this we will first need to establish a connection to our VTL.  This, just as with StarWind Virtual SAN, is simply an iSCSI target that is mounted with the standard Windows iSCSI tools.  As mentioned previously, the VTL mimics an HPE MSL8096, so be sure those drivers are downloaded and installed to ensure the VTL can be discovered.

For the VBR configuration we simply add the StarWind VTL we have set up into our backup infrastructure as a "Tape Server".  After doing so we should be able to see all of the virtual tapes that we have created and can simply set up our tape jobs or File to Tape jobs just as we always have within Veeam – only this time, our tapes are also being replicated to S3.

In the end I think StarWind is on to something here!  This is their first go at cloud replication and I'm sure there is much more to come.  In fact, we have already seen the addition of Microsoft Azure blob storage into the StarWind Cloud VTL portfolio, so things are moving quickly.  The idea of still achieving the ultimate goal of the 3-2-1 rule while not having to physically mess around with tape is appealing – not to mention that by utilizing cloud we get that offsite, scalable storage tier without the need to manage, update, or even procure the hardware.  Personally, I can see Veeam shops jumping on this.  It certainly enables that ideal environment of having some uber-fast backup repository for your most recent backups on-site while leaving StarWind and AWS with the job of migrating and managing the more "cold" archival-type data up in the cloud.  Remember, you don't want to be "that" IT shop that can't recover from the next piece of ransomware that comes down the pipe.  If you would like to give StarWind Cloud VTL for AWS and Veeam a shot you can pick yourself up a free 30-day trial here.

Turbonomic 5.9 adds visibility into YOUR cloud!

As of late I've been making it somewhat of a personal goal to try to learn more about cloud – AWS in particular.  I've been going through training, messing around with the free tier in AWS, and toying with the possibility of writing my AWS Certified Solutions Architect – Associate exam.  Now, one thing that I have learned over the past couple of months is that AWS is a beast – there are a lot of services provided – and gaining visibility into these services, from both a cost and a performance aspect, seems next to impossible.  This post isn't going to be focused on my struggles, but more so on how Turbonomic (formerly VMTurbo), more specifically the recently announced 5.9 version, can help organizations bridge that visibility gap and achieve that ultimate goal of maximum performance at minimum cost.

Turbonomic 5.9 – Making Hybrid Cloud possible.

Although this is a minor release it certainly does come with some major enhancements to the product in terms of cloud integration.  Turbonomic has always done a great job at monitoring our on-premises environments – ensuring that VMs and services are right-sized and running in the most cost-efficient way, while ensuring that performance and SLAs are met.  Their supply-demand analytics engine is second to none when it comes to determining these placements, automatically resolving issues, and providing an instant ROI to organizations' datacenters.  That said, more and more organizations are now looking to move away from housing their own datacenters and investigating cloud-enabled solutions, be it public, private, or a hybrid model – and, in typical customer fashion, we really want to use the same tools and concepts that we are used to.  Turbonomic 5.9 seems to deliver on this expectation with the addition of a number of cloudy features to the product (summarized below).

  • Cloud Migration Planning – 5.9 gives us the ability to perform very in-depth cost analysis of moving our workloads to the public cloud.  For example: what would it cost me to move workload X to Amazon?  What would the costs be of migrating workloads A and B to Azure?  What's the cost comparison of migrating workload X from this AWS region to that Azure region?  Getting cost estimates from Azure, AWS, and SoftLayer in regard to these questions is very beneficial when performing feasibility studies around cloud adoption and migration.
  • Workload Specific Costing – Once we have our workloads in the cloud, Turbonomic will track and report cost metrics in real time back to the dashboard.
  • Cloud Budgeting – Imagine setting a defined budget for your cloud services and seeing just how that budget is being consumed across the different regions, tags, and workloads defined within it.  Aside from seeing your real-time budget impacts, Turbonomic will also take into account past costs in order to project future cloud consumption costs based on your growth and performance needs.  Also, if you have some sort of discounted account or agreement with either of the cloud providers, Turbonomic uses your credentials – so you are getting YOUR actual costs – not industry averages!
  • Lower Cloud Costs – This is really what Turbonomic is about IMO – ensuring you are reaching maximum performance at the lowest cost – and now we see this in the cloud as well.  Think about gaining visibility into what it might cost to scale up to a larger instance, or how much you can save by scaling down.  Turbonomic can predict these costs as well as even automatically scale these instances down, or better yet, suspend them during times they aren't needed.
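
The budgeting and projection idea above is easy to picture: project forward from observed spend.  A naive linear version – nothing like Turbonomic's actual analytics engine, just the concept:

```python
def project_spend(monthly_costs, months_ahead):
    """Naive linear projection of future cloud spend from past bills --
    illustrating the concept only; Turbonomic's engine is far richer."""
    if len(monthly_costs) < 2:
        return monthly_costs[-1] if monthly_costs else 0.0
    # Average month-over-month growth across the observed window.
    growth = (monthly_costs[-1] - monthly_costs[0]) / (len(monthly_costs) - 1)
    return monthly_costs[-1] + growth * months_ahead

# Spend growing ~$100/month projects to $1,500 three months out:
print(project_spend([1000.0, 1100.0, 1200.0], months_ahead=3))  # 1500.0
```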

So yeah – all the benefits of the previous version of Turbonomic are now applicable to cloud – allowing organizations to get that "single pane of glass" cost view of their on-premises workloads next to their AWS, Azure, or SoftLayer workloads as well!  Certainly these aren't the only enhancements that have been released with 5.9 – we are also blessed with some pretty hefty performance improvements to the analytics engine as well – think 9 minutes to analyze and report on 100,000 VMs – not too shabby.  Also, as highlighted during their TFD presentations recently – the HTML5 interface is currently running in "dual" mode – with the intention of having all functionality fully available by the end of 2017!  But to me, the meat and potatoes of this release revolve around cloud.  Turbonomic answers a lot of the costing questions that come with cloud – and from what they claim, can lower your cloud bill by an average of 30%!  That should enable a very fast ROI for organizations!  If you want to read more about the new features, as I haven't covered all of them, definitely check out the Turbonomic "What's New" page!  Also Vladan Seget has a great round-up on his blog, as does Dave Henry on his!  And hey – if you want to check it all out for yourself you can grab a free 30-day full-featured trial of Turbonomic here!


SNIA comes back for another Storage Field Day

SNIA, the Storage Networking Industry Association, is a non-profit organization made up of a number of member companies striving to create vendor-neutral architectures and standards throughout the storage industry.  Think Dell, VMware, HPE, Hitachi – all the likely names, behind closed doors, working for the greater good.  Ok – that's their definition.  Mine?  Well, I compare it to Rocky III – you know, Rocky and Apollo, sworn enemies teaming up to make the world a better place by knocking out Mr. T.  So, I may be a little off with that, but not that far off!  Replace "Rocky and Apollo" with some "very big name storage companies" and swap out "knocking out Mr. T" with "releasing industry standards and specifications" and I think we are pretty close.

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

So, in all seriousness, SNIA has been around for 20 years and was formed initially to deal with interoperability issues surrounding networked storage.  Today we see them really focusing on architectures and standards, as well as a slew of education services, training, and certifications – and we can see a ton of work being performed by SNIA around current storage trends such as flash, cloud, object storage, and persistent memory.  You name it, they have some work being done around it.  From their website, here is a handful of the work SNIA is currently investigating…

  • Cloud Data Management Interface (CDMI)
  • Linear Tape File System (LTFS)
  • IP Based Drive Management Specifications
  • NVM Programming Model
  • Self-contained Information Retention Format
  • Solid State Storage Performance Test Specifications
  • Swordfish Scalable Storage Management APIs

Wait!  They aren’t selling anything!

Honestly, I've never been to a Tech Field Day event where a non-profit organization has spoken – so I'm very excited to see what SNIA will choose to talk about!  As shown above they have a broad range of topics to choose from – and judging by past SNIA videos from TFD, they can go quite deep on these subjects.  It will be nice to hear a vendor-neutral approach to a TFD session.  I applaud SNIA for their efforts – it can't be easy organizing and keeping all of their members in check – and it's nice to see an effort from an organization, non-profit or not, looking out for the customers, the partners, the people that have to take all of these storage arrays and protocols and make them all work!  As always, follow along with all my SFD13 content here – keep your eye on the official event page here – and we will see you in June!

X-IO Technologies – A #SFD13 preview

In the technology sector we always joke that when startups are 5 years old, that sometimes makes them legacy!  Meaning, 5 years is a long time in the eyes of a technologist – things change, tech changes, new hardware emerges.  All of this drives change!  Well, if 5 years makes a mature company, then I'm not sure what to call X-IO Technologies.  X-IO was founded more than 20 years ago, in 1995 – taking them right off the scale in terms of aging for a tech company!  Honestly, I've heard the name before (or seen the logo) but I've never really looked at what it is X-IO does – so today let's take a look at the current X-IO offerings and solutions and what they bring to the table – and, if you are interested, you can always learn more when they present at the upcoming Storage Field Day 13 event in Denver come June 14th – 16th.  But for now, the tech…

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. are all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of Maple Syrup.

What does X-IO bring?

From their website it appears that X-IO has a couple of basic offerings, all hardware appliances, and all serving different points of interest in the storage market.  Let's try and figure out what each of them does…

Axellio Edge Computing

This appears to be an edge computing system marketed mainly at companies needing performance for big data analytics, as well as those looking for a platform to crunch data from IoT sensors.  These converged storage and compute boxes are very dense in CPU, memory, and storage – supporting up to 88 cores of CPU, 2TB of memory, and a maximum of 72, yes 72, 2.5” NVMe SSD drives.  Each appliance is basically broken down into two server modules for the compute and memory, as well as up to 6 FlashPacs (a FlashPac is essentially a module hosting 12 dual-ported NVMe slots).  As far as scale goes I don't see much mention of pooling appliances, so it appears that these are standalone boxes, each serving a single purpose.
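
For what it's worth, the drive math checks out – six FlashPacs at 12 slots each gives the quoted 72-drive maximum (the per-drive capacity below is my assumption for illustration, not a published Axellio spec):

```python
# Sanity-checking the quoted density figures.
flashpacs = 6
slots_per_flashpac = 12          # dual-ported NVMe slots per FlashPac
max_drives = flashpacs * slots_per_flashpac
print(max_drives)                # 72 -- matches the quoted maximum

drive_tb = 2                     # assumed per-drive capacity, not a published spec
print(max_drives * drive_tb)     # 144 TB raw at that drive size
```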

iglu Enterprise Storage Systems

Here it appears we have a storage array.  The iglu storage system can be built using all flash, a mixture of flash and disk, or just spinning disk.  They appear to have multiple models supporting each disk configuration, with their all-flash version supporting over 600,000 IOPS.  Controllers on the iglu system are distributed, meaning whenever we add more capacity we are also adding more controllers, thus increasing both space and performance with the same upgrade.  As far as software goes we see all the familiar features such as snapshots, CDP, replication, stretched clustering, and integration with VMware, SQL, Oracle, etc.  One nice aspect is that all iglu systems, no matter the model, have access to all of the software features – there is no licensing of individual aspects of the software.

I'm excited to see what X-IO has to say at SFD13 come this June.  There was some mention of a unique way of handling drive failures, as well as a lengthy 5-year warranty on everything, which may separate them from the storage vendor pack – but I'm hoping they have much more to talk about in regards to their storage offerings to give it that wow factor!  As always you can find all my SFD13 related information here or follow the event page here to stay updated and catch the live stream!