Tag Archives: VFD5

Ravello Systems – Inception without the kick!

If you have at all visited this blog in the last 4 or so months you shouldn’t be surprised to hear that I’m a pretty big Ravello Systems fan!  I was part of their beta for nested ESXi and I’ve written about my thoughts on that plenty of times.  With the beta out of the way and access granted to all the vExperts, Ravello Systems took hold of the clicker at VFD5 in Boston for their first of what I hope is many Tech Field Day presentations.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I did not receive any compensation nor am I required to write anything in regards to the event or the sponsors. I have also been granted early access to Ravello Systems’ ESXi beta in the past, and have received free access as a vExpert.  All that said, this is done at my own discretion.

As I mentioned earlier I’ve written plenty about what I’ve done utilizing Ravello Systems.  The platform is great for configuration validations, home lab type stuff, and for just exploring different functionality within vSphere.   You know, user type stuff.  At VFD5 Ravello went pretty deep in regards to how their software functions within Google and AWS, so I thought I’d take a different approach and try and dive a little deeper into how their technology functions this time around…to the point that my brain started to hurt.

HVX – A hypervisor that runs hypervisors, designed to run on a hypervisor – huh?!?!

Ravello’s magic sauce, HVX, is custom built from the ground up to be a high performance hypervisor that runs applications (and other hypervisors) while itself running on a hypervisor (in the public cloud).  To say Ravello would know a thing or two about developing a hypervisor would be a major understatement – Ravello’s co-founders, Benny Schnaider and Rami Tamir, were once the co-founders of another start-up called Qumranet.  You know, the same Qumranet that originally authored this little known thing called the Kernel-based Virtual Machine, or better known as….yeah, KVM.  So needless to say they have a little experience in the hypervisor world.

The first dream within a dream

As we know, Amazon’s EC2 is essentially an instance of Xen, whereas Google’s Cloud utilizes KVM.  So when we publish our application inside of Ravello we essentially deploy an instance of HVX, installed within a VM that has been spun up on either Xen or KVM – once our HVX hypervisor has been instantiated on the cloud hypervisor, our images or VMs within Ravello are deployed on top of HVX.  So even without yet touching ESXi within Ravello we are 2 levels deep!  Now in terms of a native ESXi deployment we know that we can take advantage of common virtualization extensions such as Intel VT and AMD SVM, however in HVX, since we have already been abstracted away from the hardware by the cloud hypervisor, we don’t have these – instead, HVX implements a technology called binary translation to translate any executable code from the guests that is deemed “unsafe”.  This is coupled with something called direct execution, which basically allows any code that doesn’t need to be translated to run directly on the CPU.  Honestly, if you want to dive deeper into binary translation and direct execution Ravello has a great blog outlining it in a lot more detail than can fit into my maple syrup soiled, hockey statistic filled Canadian brain.  Aside from the performance features, HVX also presents emulated hardware to its guests – the same hardware that we as VMware administrators are all used to – things like PVSCSI, VMXNET3, LSI, etc. – this is all available to our guests running on top of HVX, even to our guests running on top of our ESXi guests on top of HVX – I know right!
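To make the binary translation / direct execution split a little more concrete, here’s a tiny conceptual sketch – entirely my own invention, not Ravello’s code – of how a hypervisor might route guest instructions between the two paths:

```python
# Hypothetical sketch of the binary translation / direct execution split --
# not Ravello's actual implementation, just the concept: safe guest code runs
# as-is on the CPU, while "unsafe" (privileged) instructions are rewritten.

UNSAFE_OPS = {"cli", "sti", "hlt", "out"}  # illustrative privileged ops

def dispatch(instructions):
    """Route each instruction: direct execution for safe ops,
    binary translation for privileged ones."""
    plan = []
    for op in instructions:
        if op in UNSAFE_OPS:
            plan.append(("translate", op))   # rewritten into safe equivalents
        else:
            plan.append(("direct", op))      # runs natively on the CPU
    return plan

print(dispatch(["mov", "add", "cli", "mov"]))
```

Without hardware extensions like Intel VT available inside the cloud VM, this software split is what keeps the nested guests both safe and fast.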


So, what actually happens when we click that ‘publish’ button from within the Ravello interface is somewhat unique – we know we need to install HVX into our cloud VM, but how many instances of HVX actually get deployed?  I’m not going to try and understand their algorithms around how they size their hypervisor, so I’m just going to say it depends on the resource allocation of the VMs within your application.  You could end up with a single VM running on one instance of HVX or you could end up with 6 VMs running on 2 instances of HVX – however the deployment scenario plays out, you can be assured that it will only be VMs belonging to that single application that get deployed on the HVX instances – no VMs from other people’s applications, not even any VMs from other applications that you may have.

That networking though!

Perhaps one of Ravello’s major strong points is how it exposes a complete L2 network to the applications running on top of it!  By that I mean we have access to everything L2 provides – services such as VLANs, broadcast, multicast, etc. – within the overlay network Ravello implements.  As we mentioned before, depending on the size of the application being deployed, we may or may not have multiple instances of HVX instantiated within the cloud provider.  If we are limited to a single HVX instance, then the networking is “simple” in the sense that it doesn’t have to leave their hypervisor – all switching, routing, etc. can be performed within the one HVX instance.  However when an application spans multiple HVX instances, creative technologies come into play.  Ravello has essentially built their own distributed virtual switching mechanism which can tunnel the traffic between HVX instances or cloud VMs via UDP connectivity.
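The idea of carrying a full L2 frame inside a UDP payload is the same trick VXLAN-style overlays use.  Here’s a toy sketch of the encapsulation step – the 8-byte header layout is completely invented for illustration, since Ravello’s actual wire format isn’t public:

```python
# Toy sketch of tunneling an L2 frame between HVX instances over UDP,
# in the spirit of VXLAN-style overlays. The "HVX0" header is invented
# for illustration -- Ravello's real wire format is not public.
import struct

def encapsulate(l2_frame: bytes, network_id: int) -> bytes:
    # 8-byte invented header: 4-byte magic + 4-byte overlay network id
    return struct.pack("!4sI", b"HVX0", network_id) + l2_frame

def decapsulate(packet: bytes):
    magic, network_id = struct.unpack("!4sI", packet[:8])
    assert magic == b"HVX0"
    return network_id, packet[8:]

# dst MAC, src MAC, ethertype, payload -- a minimal Ethernet frame
frame = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00" + b"payload"
nid, out = decapsulate(encapsulate(frame, 42))
assert (nid, out) == (42, frame)
```

Because the inner frame travels untouched, broadcast, multicast and VLAN tags all survive the trip between cloud VMs.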


And storage…

The last challenge as it pertains to running Ravello applications inside the cloud comes in terms of storage performance.  Having HVX slotted in between the running applications and AWS allows Ravello to take advantage of the object storage capabilities of S3, yet still present the underlying storage to the VMs as a block device.  Essentially, when we import a VM into Ravello Systems, it’s stored in its native format on top of HVX and appears to be a block device, but under the covers the HVX file system is storing this information in object storage.  On top of all this abstraction HVX implements a copy-on-write file system, delaying the actual allocation of storage until it is absolutely needed – in the end we are left with the ability to take very fast snapshots of our images and applications, easily duplicating environments and allowing people like myself to “frequently mess things up.”
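A minimal copy-on-write sketch shows why those snapshots are so fast – a snapshot only copies the block map, never the data.  This is my own illustration of the general technique, not HVX’s file system:

```python
# Minimal copy-on-write block device sketch backed by an object store
# (a dict standing in for S3). Blocks are only allocated on first write,
# and a snapshot is just a copy of the block map, not the data.

BLOCK = 4096

class CowDisk:
    def __init__(self, store, block_map=None):
        self.store = store                    # object storage (key -> bytes)
        self.map = dict(block_map or {})      # logical block -> object key

    def write(self, lba, data):
        key = f"obj-{len(self.store)}"        # new object per write (COW)
        self.store[key] = data
        self.map[lba] = key

    def read(self, lba):
        key = self.map.get(lba)
        # thin provisioning: unallocated blocks read back as zeros
        return self.store[key] if key else b"\x00" * BLOCK

    def snapshot(self):
        return CowDisk(self.store, self.map)  # O(1): copies the map only

store = {}
disk = CowDisk(store)
disk.write(0, b"v1")
snap = disk.snapshot()
disk.write(0, b"v2")                          # new object; snapshot unaffected
assert snap.read(0) == b"v1" and disk.read(0) == b"v2"
```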


The Ravello presentation at VFD5 was one of my favorites from a technology standpoint – they did a great job outlining just what it is they do, how they do it, and how they are choosing to deliver their solution.  There were some questions around performance that were met head on with a whiteboard and overall it was a great couple of hours.  Certainly check out some of the other great community posts below centered around Ravello to get some more nested goodness…

Ravello has a great product which honestly completely blows my mind when I try and wrap my head around it – we have our VMs, running on ESXi, running on HVX, running on Xen, running on some piece of physical hardware inside an Amazon data center – attaching to both Amazon EBS and S3 – we are snapshotting these things, saving them as blueprints, redeploying to Google Cloud which completely flips the underlying storage and hypervisor!!  It’s exporting VMs out of our current vSphere environments and deploying them into the public cloud, complete with all of their underlying networking – already set up for you!  Ravello has coined their nested virtualization capabilities as Inception, and if you have ever seen the movie I’d say it certainly lives up to the name.  It has this magic about it – where you are in so deep yet still in control.  If you have a chance, check out their VFD5 videos and sign up for a free trial to check them out for yourself.

VMTurbo – allowing smart people to do smart things

Let’s face it, our environments now are way more complex than they were 10 years ago!  Although some tasks and components may be easier to work with and not quite as specialized, we have a lot of them – and they all need to work, in perfect harmony, together.  The problem with this is at times we get a couple members of the choir that get a little out of key – CPU starts screaming, the network gets chatty and the next thing you know we have an environment that’s screaming out of control – CPU starts shoving network, network starts drowning out memory and to be quite honest, pretty much everyone in the choir at this point sounds like $@#!.

Although this scenario may sound a little far-fetched or a wee bit out there – I mean, CPU can’t sing, we all know that!  Either way you put it, any choir needs a conductor, a leader, someone who overlooks the complete environment, instructing certain members to gear down and others to ramp up.  Last month in Boston at VFD5, VMTurbo showed us just how they can wave the baton when it comes to bringing together the components of enterprise IT.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Having just seen VMTurbo at #VFD4 in Austin only 6 months prior, I was skeptical as to what they would have to talk about in Boston, thinking it was mainly going to be the same presentation – I was wrong!  They could have easily filled another 4 hours talking about the new features that they have embedded into the new release of their flagship product, Operations Manager 5.2.


Traditionally VMTurbo has gathered its market intelligence by polling and retrieving statistics and metrics from the hypervisor – while this is a good technique and is used by many monitoring solutions today, there are some applications which don’t necessarily work well in this scenario.  Applications which look after their own resource usage – think SQL, Java heaps, etc. – may not properly reflect their true usage at the hypervisor layer.  For this reason VMTurbo has released an Application Control Module (ACM), which completely integrates into their entire supply/demand model of monitoring.  To help put it into perspective let’s have a look at SQL – ACM essentially brings in statistics around transactions, response time, database memory usage, etc. – all items which are not available within the hypervisor itself.


From here, VMTurbo users are able to define QoS policies, or SLAs, around their applications’ performance.  Think “I need x milliseconds of response time.”  VMTurbo then looks holistically at your environment – it knows about the infrastructure underneath the app and what resources are available – and it now knows how that application is configured, memory management and all.  With all of this knowledge VMTurbo can then configure your environment and your application to a desired state, one that we know is running efficiently, while meeting those SLAs and QoS policies we have set up in regards to the application!
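To put a shape on that supply/demand idea, here’s a hedged sketch of an SLA-driven desired-state check – the thresholds and action names are my own invention, not VMTurbo’s engine:

```python
# Hedged sketch of an SLA-driven "desired state" check in the spirit of
# VMTurbo's supply/demand model -- thresholds and actions are invented.

def desired_state_actions(apps, sla_ms=50):
    """Suggest resource actions for apps breaching their response-time SLA."""
    actions = []
    for app in apps:
        if app["response_ms"] > sla_ms:
            # demand exceeds supply: buy more resources for this app
            actions.append((app["name"], "scale-up"))
        elif app["response_ms"] < sla_ms * 0.5:
            # over-provisioned: release resources back to the market
            actions.append((app["name"], "scale-down"))
    return actions

apps = [{"name": "sql01", "response_ms": 80},
        {"name": "web01", "response_ms": 10}]
print(desired_state_actions(apps))
```

The real product folds in far more signals (configuration, memory management, the full supply chain), but the loop is the same: measure against the SLA, then move resources toward the desired state.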


Aside from applications, VMTurbo has been busy with a few other cool benefits as well!  With the adoption of public and hybrid cloud on the rise, they’ve seen a need to introduce a lot of enhancements in terms of networking – for example, knowing the physical location of applications is key in terms of placing “chatty” applications close to each other in order to reduce latency, while still maintaining their “desired state” in terms of CPU, memory and storage as well.  They do this by grouping chatty applications together in what they call a vPOD.  From there OM uses NetFlow to discover your physical switching configuration, and can work to ensure that vPODs are grouped together on the same top of rack switch or the same public cloud region, etc., moving the entire vPOD if one application requires more resources.


Just as VMTurbo has made steps to get more information out of the application stack, they are doing the same with storage!  By completely understanding the storage array underneath your infrastructure, OM is able to take action on overcoming storage issues around capacity and performance.  Think of things such as knowing whether to expand a current volume or deploy a new one!  OM understands almost everything there is to know about your infrastructure and applications, and can therefore make the best decision on how to meet the SLAs defined on those applications from a storage standpoint – one time it may make sense to simply grow a volume, while other times, due to other applications running on that same volume, it may be more efficient to create a new volume and migrate the application in question.

VMTurbo has certainly taken a unique approach to monitoring and resolving issues within your environment.  This whole economic market play – with supply/demand being applied to your infrastructure and applications – is different, but honestly makes sense when looking at resource utilization.  I like how Operations Manager has been built – this modular approach allows them to come out with new features such as the application and storage modules and simply plug them into the product, where they are inherited into the supply chain model and analytics can immediately be applied to them.  And as of now you can do it all from your own cloud on AWS!

If you want to watch the VMTurbo videos yourself you can do so here – or check out my complete VFD5 page here.  Also, we have had some other great community posts around what VMTurbo spoke about – be sure to check out each of them below as each delegate seemed to write about a different part of their presentation…

Operations Manager can certainly do some amazing things, allowing you to automate things such as moving an application to the cloud based on its supply/demand analytics – which at first sounds a bit scary – but hey, it wasn’t that long ago that people were wary of enabling DRS, right?!?

Rubrik brought Apple simplicity and Google scale together, you’ll never guess what happens next…

Finally I have figured out a way to incorporate techniques from the vast amounts of click-bait on my Facebook into this blog!  Did it work?

In all seriousness though, Rubrik, who recently presented at Virtualization Field Day 5 in Boston, have brought together both the simplicity of Apple and the scale of Google – and what happened next was a scalable, converged backup and recovery solution containing both hardware and software.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Now, all the time I hear people moan about backup solutions, shrug them off and say they are boring – I for one am not one of those people – I find the backup and recovery space very interesting, and honestly there has been a ton of innovation in this space over the last 10 years that has made backup solutions more than just a simple copy of your data – it’s been quite fun to watch…but backup and recovery has a lot of players in the game right now, so what really can Rubrik bring to the table?

Rubrik – Time machine for Cloud Infrastructure

The biggest and most prominent difference that sets Rubrik apart is that they are one of really only a few vendors that ship their backup solution as a physical appliance.  While most backup vendors ship a software solution and leave it up to the customer to decide what to use for their backup storage, Rubrik is a scale-out solution that comes with both the software and the hardware.  Their first shipping product is the R300, supporting roughly 200 VMs with around 100TB of storage available.


With control of the storage as well as the software, Rubrik is able to leverage its various storage tiers in order to make both the backup and recovery process more efficient.  Take Instant VM Recovery for example – although this is not a new feature, and we’ve seen a lot of backup solutions integrate the ability to simply power on their VMs directly from a backup file, Rubrik goes one step further by granting flash resources to the instantly restored VM as it sees fit – essentially turning your backup appliance into a tier 2 storage array, instantly restoring a new copy of your VM – allowing you to run the VM on the Rubrik appliance with full storage resources until you see a fitting time to migrate it back to your production array.  During the backup process Rubrik is also able to ingest the initial bits into their flash tier, allowing a much quicker response back to vCenter and keeping the amount of time snapshots are open very minimal.  From there data is deduplicated and compressed inline before being written down to another tier of disks or even into the cloud.
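Inline deduplication generally works by content-hashing chunks so identical data is stored only once.  This little sketch shows the idea – Rubrik’s real pipeline (chunk sizing, hashing, placement) is certainly more sophisticated than this illustration:

```python
# Illustrative inline dedupe + compress sketch: chunks are content-hashed so
# identical data is stored once, and only new chunks get compressed/written.
import hashlib
import zlib

def ingest(data, store, chunk_size=4096):
    """Split, dedupe by content hash, and compress new chunks."""
    keys = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:              # dedupe hit: skip the write entirely
            store[key] = zlib.compress(chunk)
        keys.append(key)                  # recipe to rebuild the stream later
    return keys

store = {}
ingest(b"A" * 8192, store)                # two identical 4 KB chunks...
assert len(store) == 1                    # ...stored once
```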

Wait did you say cloud?

Another nice feature that Rubrik has built in is Amazon S3 integration.  Essentially, when you set policies on a per-VM basis you not only specify the number of restore points you want on disk, but you can also state that data which is x number of years old be maintained inside of Amazon.  This provides customers with somewhat of an archiving solution, and saves money on needing to purchase additional local storage for these purposes.  When it comes time to restore data from Amazon, Rubrik has technology built in to simply pull individual files down from Amazon, without having to pull down the entire VM or VMDK – certainly saving an organization a bit of money in transport costs if you only need to get at individual files.
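Conceptually the per-VM policy is just an age-based placement decision.  Here’s a hypothetical sketch – the field names, threshold and the fixed “now” are all invented for illustration, not Rubrik’s actual policy engine:

```python
# Hypothetical per-VM retention placement sketch: recent restore points stay
# on the local appliance, points past the archive threshold move to S3.
from datetime import datetime, timedelta

def place_restore_points(points, archive_after_days=365):
    """Decide local vs S3 placement for each restore point by age."""
    now = datetime(2015, 7, 1)  # fixed "now" so the example is deterministic
    placement = {}
    for name, taken in points.items():
        age = now - taken
        placement[name] = "s3" if age > timedelta(days=archive_after_days) else "local"
    return placement

points = {"daily-2015-06-30": datetime(2015, 6, 30),
          "yearly-2013-01-01": datetime(2013, 1, 1)}
print(place_restore_points(points))
```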

If you are looking for more delegate and community opinions in regards to Rubrik and the technology they are creating I’d recommend the following…

Rubrik definitely has some awesome technology built into their 1.0 product, which is heavily focused around their metadata foundation.  Having control over both the hardware and software has certainly given them many advantages: fast search, quick recovery and more control over how the backup data gets laid out on their tiered storage platform.  They do have some areas where they are behind – things like object level recovery such as emails, Active Directory objects, databases, etc. – but to me the real key to the success of Rubrik will hinge around pricing – they told us it’s not going to break the bank – but honestly, a lot of the time backup solutions get thrown down to a second tier budget line and are not in the forefront of strategic IT decisions – so having to drop a large sum of money on an insurance policy may not be in the cards for some IT shops.  That said, it’s 1.0 and they have done a lot in the year and a bit they have been in existence – including adding some really smart and well known people to their staff.  Rubrik is certainly a company to keep an eye on in the coming year as they have built a solid, efficient foundation on which they layered their backup solution – so far they have “taken the backup out of recovery” and I can’t wait to see what they do next…

You can see more of my Virtualization Field Day 5 coverage here, as well as all of the recorded streams here.

Removing disaster recovery from your business with OneCloud

Let’s face it!  We all have enough to worry about with our production environments!  We’ve got servers, storage, switches, fabrics, subnets, hypervisors, cabling – ugh, cabling!  Anyways, the point is that having to worry about all of the complexity and components within our production data center is enough to drive any IT pro crazy – and yet we are still asked to magically duplicate all these worries in a secondary location for disaster recovery purposes – in the words of my 6yo, “no fair!”

Sure, we’ve seen virtualization come along and make our lives easier – and in turn it’s made disaster recovery easier as well.  Encapsulating our physical servers into a group of files and abstracting the hardware – aka a VM – has opened up so many doors for us to simply pick up and move that VM elsewhere.  The problem with DR is that there is still so much more to consider – things like what our networks look like in our secondary site, whether our VMs get re-IP’d during a fail-over, and one of the most important aspects – when we fail over, how do we ever get back?  Wouldn’t it just be nice to simply take the worries of disaster recovery right out of your business?

OneCloud is all it takes!

OneCloud Software took the stage at VFD5 in Boston this June and began their pitch on just how they can solve the disaster recovery complexities inside organizations today.  Disaster recovery is one of the biggest use-cases for cloud today – lately we have seen VMware make an entrance into the space, Veeam has given their partners the option to become a cloud service provider for their solutions, and most disaster recovery solutions provide some sort of cloud-like integration.  So how does OneCloud plan to enter this market and what exactly can they bring to separate themselves and make them unique?  Their stance – hey, DR is complex, cloud is challenging – OneCloud can abstract away the complexities of both, turning the public cloud into a secure extension of your data center.  At its most basic, their first product, OneCloud Recovery, ties together your on-premises VMware environment with Amazon AWS, and then replicates data as it changes, essentially duplicating your environment in Amazon.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So with that in mind let’s take a deeper look at just how OneCloud Recovery works…

First up is planning – as with any project, planning is a very important phase and OneCloud does not take this lightly.  Their solution relies upon a lightweight tool they call OneCloud Software Insight.  Insight is a tool that installs within your on-premises VMware environment – from there it discovers and maps out your infrastructure, including your VMs, storage and network.  It then determines what VMs are eligible for their DR solution, allows you to change the policies applied, assigning greater or lesser RPOs on a per-VM basis, and uses that data to estimate costs within Amazon EC2.  Another feature of Insight I found interesting is that it also takes into account the amount of bandwidth required compared to the amount of bandwidth available in order to get your data into the cloud while still maintaining the SLAs and RPOs you have set up within the application – this is something that I’ve not seen in a lot of other assessment/cloud readiness type tools.  At the end you are left with a very detailed cost breakdown of what your environment might cost to run in Amazon using OneCloud Recovery, taking into account all of the individual Amazon costs (EC2, S3, Glacier, etc.) as well as the licensing costs of OneCloud Recovery.  Honestly, this tool by itself provides some unique value and data to a customer, even if they don’t plan to leverage OneCloud Recovery.
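The bandwidth check is really just arithmetic: can the uplink ship each VM’s changed data within its RPO window?  Here’s a back-of-the-envelope sketch in that spirit – the numbers and field names are invented for illustration, not Insight’s actual model:

```python
# Back-of-the-envelope check in the spirit of Insight's bandwidth analysis:
# can the uplink replicate each VM's change rate fast enough to hold its RPO?
# Numbers and field names are invented for illustration.

def rpo_feasible(vms, uplink_mbps):
    """Return names of VMs whose change rate can't meet their RPO."""
    at_risk = []
    for vm in vms:
        # seconds needed to push one RPO interval's worth of changed data
        # (GB -> megabits: * 8 * 1024)
        seconds_needed = (vm["change_gb_per_hr"] * vm["rpo_hours"] * 8 * 1024) / uplink_mbps
        if seconds_needed > vm["rpo_hours"] * 3600:
            at_risk.append(vm["name"])
    return at_risk

vms = [{"name": "db01", "change_gb_per_hr": 50, "rpo_hours": 1},
       {"name": "web01", "change_gb_per_hr": 1, "rpo_hours": 4}]
print(rpo_feasible(vms, uplink_mbps=100))
```

In this made-up example db01 churns 50 GB/hr, which takes over 4,000 seconds to push over a 100 Mbps link – longer than its 1-hour RPO window – so it gets flagged.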

On to the magic..

With planning out of the way, purchase orders issued, and the trigger pulled on OneCloud Recovery, the magic can now begin – but it’s not magic, it’s tech – and this is how it all works…

First up is installation – OneCloud Recovery is installed by simply importing an OVA into your existing vSphere environment and providing two sets of credentials: your vSphere and AWS credentials/keys. So far so good, eh?  From there automation kicks in – your environment is discovered, mapped out and blueprinted, and what they call the bootstrap process begins.  This process first creates a virtual private cloud within Amazon, and then deploys a couple of appliances to both your on-premises and Amazon environments.  First, the management appliance is deployed, which essentially allows you to manage your OneCloud Recovery environment – this can be done from either on premises or within Amazon AWS due to its configuration and data being replicated to both sites.  Second, we have the worker appliances which are deployed at both ends – these are the heavy lifters of the solution which do all the moving and transfer of data, fail-over/fail-back, etc.  At the end of roughly 10 minutes or so, your complete virtual private cloud is built within Amazon and completely bridged to your on-premises environment over a secure VPN tunnel established between the two sites – all automated!

Next we create our protection groups.  A protection group is essentially a policy that defines certain SLAs/RPOs, etc.  Think of it in terms of a job that runs every 1 hour, every 2 hours, every 4 hours, etc.  Then we simply drag and drop our VMs into their designated protection groups depending on the RPO we wish to assign to the particular application.  And as you can see, the UI associated with OneCloud Recovery is very clean and looks like a joy to work with.
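Under the drag-and-drop, a protection group is really just an RPO bucket that a VM belongs to exclusively.  This sketch models that idea – the structure and interval set are illustrative, not OneCloud’s data model:

```python
# Sketch of protection groups as RPO buckets -- a policy maps an interval
# (hours) to the set of VMs dragged into it. Structure is illustrative.

PROTECTION_GROUPS = {1: set(), 2: set(), 4: set(), 8: set(), 12: set(), 24: set()}

def assign(vm, rpo_hours, groups=PROTECTION_GROUPS):
    """Drop a VM into the group for the given RPO; a VM lives in one group."""
    if rpo_hours not in groups:
        raise ValueError(f"unsupported RPO: {rpo_hours}h")
    for members in groups.values():      # remove from any previous group
        members.discard(vm)
    groups[rpo_hours].add(vm)

assign("sql01", 1)
assign("web01", 4)
assign("sql01", 2)                       # re-dragging moves the VM
assert PROTECTION_GROUPS[1] == set() and "sql01" in PROTECTION_GROUPS[2]
```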

With our VMs now assigned to a protection group the technology kicks into high gear.  The protection process begins with the OneCloud Recovery worker first snapshotting the VM in order to free up its underlying disks.  It then converts the data into their own highly compressed format and stores it locally on some tier 2/3 storage on-site, then finally replicates the data to AWS, obviously performing a full seed first and subsequently leveraging vSphere’s Changed Block Tracking from that point forward.

As far as fail-over options go, like many others OneCloud Recovery provides both test and live options.  They do however offer a bit of uniqueness in this process.  We all know that compute in the cloud costs money, right?  Imagine if we had an 8 vCPU, 32GB RAM production VM replicated into AWS – do you think we really need all that compute and memory during a fail-over test?  Probably not!  OneCloud Recovery recognizes this and allows you to power on your VMs inside of AWS in an undersized fashion!  I mean, that 8 vCPU/32GB RAM VM may function fine with 2 vCPU/16GB of RAM with no load during a test, thus saving you money!

When it’s time to fail over for real, OneCloud Recovery also has some unique features!  While most disaster recovery solutions halt their protection during a fail-over action, OneCloud Recovery actually continues to provide you with options to protect your VMs, even while they are running in AWS.  They do so by leveraging Amazon EBS snapshots and protecting the data within your AWS region.

As far as fail-back is concerned, OneCloud Recovery has some unique features as well.  When a primary data center comes back online, OneCloud Recovery is able to determine whether any existing data is still intact.  If it is, OneCloud Recovery can perform its fail-back in a delta fashion, moving only those changes that were made in AWS during the fail-over back to your primary data center, rather than having to extract entire VMs out of AWS, which could be both costly and, more importantly, time consuming.

More to come from OneCloud?

In terms of OneCloud Recovery we are looking at a 1.0 product, meaning there is a lot of development left to be done on the solution.  My personal requests – I’d love to see the software get a little more aggressive and customizable in terms of its RPOs.  Currently RPOs are set in stone at 1, 2, 4, 8, 12, and 24 hours.  Aggressively, I’d like to see some smaller numbers here – 1 hour of lost data could equate to quite a lot of money for a lot of enterprises.  Also I’d love to see integration with more clouds than just Amazon – think Google, for example, as there could potentially be some cost savings for customers there.

Looking at OneCloud Software as a company, however, I don’t think that OneCloud Recovery is going to be their only product.  They have a solid core technology that maps, discovers, and blueprints our on-premises environments, then duplicates that to Amazon.  I don’t see this only being used for disaster recovery.  I had a chat with Marc Crespi, CEO and Co-Founder of OneCloud, and ran this past him – he didn’t confirm or deny any of it – but if I had to guess I can definitely see them exploring other areas in the future – think DevOps, migrations, hybrid cloud, etc.  All areas that OneCloud Software’s core blueprint technology may be a good fit for.  This is all speculation on my part – but still, watch these guys – they are on to something…

If you are looking for more info around OneCloud Recovery I definitely recommend checking out some of the other great community posts resulting from VFD5

OneCloud mentioned that they waited to go to market with OneCloud Recovery until they had met three internal requirements – it must be simple, it must be cost optimized and it must be a complete solution.  Honestly, I think they have accomplished those goals with OneCloud Recovery.  The automation that has been built into the product, coupled with a very clean UI, takes the cake in terms of simplicity – fewer knobs to turn = more simplicity!  Cost optimization – well, the Insight tool definitely gives you a great understanding of all the costs involved in using OneCloud Recovery, even taking into account that they can down-size VMs when running in Amazon.  And as for a complete solution, they have for sure achieved what they set out to do – establish public cloud as an extension of your data center for disaster recovery purposes.

PernixData – transforming to a new arc…

Nearing the end of their Virtualization Field Day 5 presentation, Satyam Vaghani, CTO and co-founder of PernixData, touched on something dubbed “The arc of company life” – basically explaining how a company’s journey involves an arc, moving from startup, through growth, till finally peaking and then beginning a decline.  Certainly all of these stages sound ideal for a company, minus the decline phase – but how does a company avoid this decline?

Transformation fuels growth

Satyam described this avoidance as transformation.  By creating a new arc, spawning off the original arc, companies can transform and avoid the dreaded decline phase – essentially performing a pivot and focusing on something new, something useful, something different.


There has been a lot of talk in regards to disruption in the last 10 or so years – VMware came along and disrupted our data centers – which is a good thing, change is good – I think we can all agree on that – but Satyam says that in order for PernixData to cause disruption in our data centers they must first focus on technologies and solutions that will disrupt PernixData as a company – force themselves to look beyond server-side cache, beyond providing just performance to our VMs, force themselves to innovate and at the same time to transform and begin their journey on a new arc…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.


A new hill to climb

So what is this new innovation, this pivot if you will, that PernixData has decided to take?  Satyam took us on a journey explaining just how PernixData is trying to shift, and gave us a few examples of some new products and solutions that PernixData is now focusing on, and how some enhancements to their core solution, FVP, are fuelling their transformation and growth…

PernixData Architect

Let’s start with PernixData Architect – a new monitoring solution for your data center which, as you can probably imagine, focuses heavily on storage.  The interface for PernixData Architect looks smooth, implementing an SSO solution between FVP and other soon-to-be-released applications from PernixData (announced below).  PernixData Architect goes deeper than simply reporting on those famous virtualization storage parameters such as DAVG, KAVG, and QAVG – think more in terms of statistics that we don’t see that often, but certainly do matter.  Sitting on top of the kernel, PernixData Architect does indeed report on latency and IOPS statistics, however it also presents us with metrics around commonly used block sizes, read/write ratios, active working sets, etc.  What can we do with all this data?  Well, aside from tuning and sizing our storage infrastructure, it’s the perfect data for configuring and setting up FVP (imagine that).


PernixData Cloud

So, PernixData is gathering all this data as it pertains to storage and giving us visibility into our own data centers, but what if we wanted to take that data and compare it to others?  PernixData Cloud gives us the ability to do just that.  I like to think of PernixData Cloud as a graphed representation of the hallway talks that take place at conferences.  We’ve all had those experiences where we are standing around at a conference asking our peers questions like “What’s your max IOPS?”  “What type of latency are you seeing?”  “Oh, you’re running an EMC VNX, so am I, what type of maximum throughput do you see?”  PernixData takes these types of questions and places them (and the answers) into an interface familiar from Architect and FVP.  Imagine being able to see the most common type of SSD drive being utilized, or what type of storage array the industry is using behind FVP – all while comparing your environment to the average of similar environments in terms of latency, IOPS, etc.  PernixData Cloud Insights can certainly be used to help influence purchasing decisions, drive adoption, and gauge how you stack up against the rest of the world.  Again, it takes those hallway conversations and brings them into your data center, with a more accurate and exact answer (basically, no lies).
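A peer comparison like that essentially boils down to a percentile rank.  Here’s a minimal sketch of the idea, with entirely made-up numbers – this is my own toy, not anything from the Cloud Insights backend:

```python
def percentile_rank(value, peers):
    """Percentage of peer environments reporting a lower value than ours."""
    below = sum(1 for v in peers if v < value)
    return 100.0 * below / len(peers)

# Made-up average-latency figures (ms) from "similar" environments
peer_latency_ms = [0.8, 1.2, 1.5, 2.0, 3.1, 4.0, 5.5, 7.0]
mine = 1.4
rank = percentile_rank(mine, peer_latency_ms)
print(f"{rank:.0f}% of comparable environments see lower latency than ours")
```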


That said, while data center comparison solutions such as PernixData Cloud Insights are extremely helpful, they aren’t much good until they have a vast, wide range of data from all verticals – and getting this data can sometimes be a major challenge.  I mean, you can ask customers for data all day long, but unless something is in it for them, you will usually come up short.

Enter PernixData FVP Freedom

PernixData FVP Freedom solves the “What’s in it for me?” problem when providing data for Cloud Insights, but before we get into how it does that, let’s take a quick look at a few of the benefits of FVP.  If you are looking for a great overview of PernixData FVP I’d definitely check out Peter Chang’s blog around FVP 2.0.  For those looking for a quick description of what FVP is, we can look no further than Satyam’s words: “Making previously impossible storage systems possible”.  They do this by accelerating reads and writes to your storage array, caching to either SSD storage or RAM locally on each host.  The cache-to-memory feature is interesting to say the least – with more and more vendors shipping servers with ungodly amounts of RAM, it only makes sense to take advantage of it and utilize it to its fullest potential.  FVP allows you to do this with their Distributed Fault Tolerant Memory (DFTM) solution.  By aggregating RAM resources from all your hosts and synchronously replicating writes to at least two hosts within your cluster, you get the performance of memory without the risk of its volatility, ensuring both efficiency and availability for all of your accelerated workloads.

Now let’s get back to the data seeding issue within PernixData Cloud – if you haven’t already guessed from the newly announced product name (FVP Freedom), PernixData is releasing a new version of their FVP software for the low low price of free!  There will be a few caveats with the release – technology limitations allow you to use only one DFTM cluster, limited to 128GB of write-through cache, but within those limits you can accelerate unlimited applications, VMs, and hosts.  Outside of the technology there is one more catch: any PernixData Freedom implementation will upload statistics into PernixData Cloud – this is how they solve the problem of seeding the PernixData Cloud environment.  A trade for your environmental data – which will of course be aggregated and anonymized.

Movin’ on up!

All in all I like the announcements coming from PernixData out of VFD5!  And I’m not alone with this feeling – there are a number of other posts from some great bloggers listed below, definitely check them out.

Offering up FVP in a freemium-type model can only help adoption – getting the application into more data centers and more potential customers’ hands is a good thing.  And it’s not like you are that limited with 128GB of memory either – the example PernixData gave us during the presentation revolved around Login VSI testing.  They went from 181 VSImax users to 328 after implementing a 128GB DFTM cluster across 2 hosts – basically cutting their per-desktop VDI storage costs in half.  Needless to say this is a pretty big benefit for budget-constrained companies that might not mind sharing their environment data with PernixData Cloud users in exchange for free software.  If you’re interested you can sign up for FVP Freedom here.  The other new applications (Architect and Cloud Insights) were a surprise to me – it’s not like they are doing anything new here, there is tons of monitoring software out there, but they are reporting on and using the data differently than most of the others.  Sometimes having a unique spin on the data that you report on is enough to raise eyebrows – and I have no doubt that PernixData is on to something here, if not a solid base for something more to come!  However it plays out, they certainly took another path on the arc and are on the up and up yet again!
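The back-of-the-napkin math on that Login VSI example is straightforward – assuming a fixed storage spend (the $20,000 figure below is purely illustrative, not from the presentation), nearly doubling the users you can host roughly halves the storage cost per desktop:

```python
def cost_per_desktop(storage_spend, vsimax_users):
    """Storage dollars divided by the number of desktops the platform supports."""
    return storage_spend / vsimax_users

storage_spend = 20000.0                          # hypothetical storage budget
before = cost_per_desktop(storage_spend, 181)    # without DFTM
after = cost_per_desktop(storage_spend, 328)     # with the 128GB DFTM cluster
print(f"${before:.2f} -> ${after:.2f} per desktop")
print(f"{100 * (1 - after / before):.0f}% cheaper per desktop")
```

Roughly a 45% reduction – “basically half”, as advertised.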


#VFD5 Preview – VMTurbo

Once again it looks like I’m going to have to get on a plane and travel to the great US of A in order to see my fellow Toronto VMUG co-leader Eric Wright, who lives within a couple hours of where I’m sitting right now!  But that’s ok, because Eric will be bringing with him the VMTurbo Virtualization Field Day 5 presentation in Boston!  For those that know Eric or have heard him speak, you will know what I mean – he certainly has a way of keeping the audience interested and getting his point across – a couple of great qualities to have when speaking…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Anyways, it feels like we just got done having a look at VMTurbo at VFD4 in Austin, and here they are right back in front of us at VFD5 in Boston.  And a lot has changed since January with both the company and their flagship product, Operations Manager – they’ve kicked their TurboFest user groups into high gear, hosting meetings in San Francisco, London, and Atlanta; they were named one of the best places to work by the Boston Business Journal; and Operations Manager 5.2 was released, bringing features such as QoS Adherence, more support at the application level for MS SQL and Oracle, integration with Arista Networks to help make more “network aware” decisions, and of course the complete package now delivered through a SaaS offering in Amazon AWS.  So, yeah, they’ve been busy!

An economic look at your data center

If you haven’t had a look at Operations Manager you probably should.  VMTurbo takes a unique approach to monitoring and tuning your environment to ensure you get to what they like to call “Data Center Nirvana”.  Essentially they take an economic model and apply it to your infrastructure – turning your data center into a supply chain.  By treating your resources – things like CPU, memory, disk, etc. – as suppliers and your VMs as consumers, VMTurbo is able to apply economic formulas to your infrastructure, increasing the cost of resources when supply is sparse and decreasing it when supply is bountiful.  By doing so, Operations Manager is able to determine that while migrating a VM may make sense at first glance, resource costs may be too high on the other host, and thus recommend leaving it be.  It’s an interesting way of looking at things and makes a lot of sense to me…
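The idea is easy to sketch.  Below is my own toy version of such a supply-and-demand pricing model – the curve and all the numbers are invented for illustration, and VMTurbo’s actual formulas are certainly far more involved:

```python
def resource_price(utilization, base_cost=1.0):
    """Price curve for a resource: cheap while supply is bountiful,
    climbing steeply toward infinity as the resource saturates."""
    if utilization >= 1.0:
        return float("inf")
    return base_cost / (1.0 - utilization)

def placement_cost(vm_demand, host_utilization):
    """What a VM 'pays' to land on a host: its demand for each
    resource times that resource's current price on the host."""
    return sum(demand * resource_price(host_utilization[resource])
               for resource, demand in vm_demand.items())

vm = {"cpu": 0.10, "mem": 0.05}        # fractions of a host's capacity
host_a = {"cpu": 0.50, "mem": 0.40}    # moderately loaded -> cheap
host_b = {"cpu": 0.90, "mem": 0.85}    # nearly saturated -> expensive
print(placement_cost(vm, host_a))      # ~0.28
print(placement_cost(vm, host_b))      # ~1.33
```

Even in this toy version you can see the behaviour described above: the same VM “costs” several times more to place on the busy host, so the recommendation engine would leave it where it is.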

Now there is certainly a lot more to what Operations Manager does and I encourage you all to tune into VFD5 to learn all of it.  You can do so by heading over to the VFD5 page and watching the live stream, as well as keep up to date with all my content here.  VMTurbo is a fast growing company with a unique idea so I’m sure they will have something mind-blowing for us come next Wednesday when they kick off all that is VFD5!

#VFD5 Preview–Ravello Systems

Ravello Systems have certainly had their fair share of buzz lately, and rightly so – the sheer fact that you can run a 64-bit VM, on top of a nested ESXi host, on top of their hypervisor (HVX), on either Amazon or Google Cloud is, to say the least – the bomb!

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

I’ve had the chance to work with Ravello during their nested ESXi beta along with a few other bloggers and was blown away by the performance they provided while doing the exact scenario as described above.  I did a few posts on Ravello, one which involved a vMotion from Amazon AWS to Google Cloud if you’d like to check it out!  Needless to say I’m excited to see Ravello IRL at VFD5 on June 26 in Boston.  Also, I’ve heard through the grapevine that long-time Toronto VMUG attendee and friend Kyle Bassett will be part of the presentation – Kyle is a brilliant mind so you won’t want to miss it!

A home lab replacement?

In a lot of ways I can get the performance that I need in order to replace my home lab!  That said, I’m nowhere near as extravagant when it comes to home labs as a lot of people in these communities.  When it comes down to it though, a lot of what I do within the lab is configuration validation, testing different setups, etc.  All of this is easily accomplished in Ravello!  In fact, in some ways I can do a lot more within Ravello than I can within my own home lab – stringing together two datacenters, one in Google, one in Amazon, via VXLAN for example!  For the most part I’m finding myself working more in cloud platforms than in my basement anymore.

Bells and whistles

I would be selling Ravello short if I just said they allowed you to run nested ESXi in Amazon – they have a lot of value add, bells and whistles so to speak that make the service what it is.

Firstly, they have what’s called an application – an application is essentially one or more VMs that perform some sort of function.  You could think of a couple of ESXi hosts, a vCenter Server, and some sort of iSCSI storage appliance as an application.  Applications can be started and stopped as a whole unit, rather than each individual VM.

Secondly, they have blueprints.  We can think of a blueprint as a point-in-time snapshot of an application.  Basically, blueprints allow you to save a configuration of an application to your library, which you can then deploy to either another application or another cloud.  Think of a blueprint as being a base install of your ESXi/vCenter setup – you know, before you go mangling inside of it.  If your original application ever breaks, or you’d like to explore new features without affecting your current setup, you could simply save your application as a blueprint and deploy a new instance of it.  One newly released feature is the Ravello Repo, which allows customers to essentially share their blueprints with others, saving a lot of time when it comes to building up test and use cases.

Thirdly is pricing!  Honestly I’m not sure what hard costs I’ve incurred, as I have gotten 1000 CPU hours/month for free – and if you are a vExpert you can too, as they have just extended this offer to all vExperts – very generous!  Not a vExpert?  No problem, you can still get a free fully functioning trial here, good for 14 days of all-you-can-eat cloud.  Although I’ve never seen my own pricing I have looked at their pricing calculator – selecting 12 vCPUs, 20GB of RAM, and a TB of storage comes out to around $1.32/hour – which to me is more than enough resources to get a small lab up and running, and is more than affordable for what you get.  Plus you don’t deal with Amazon or Google at all – Ravello takes care of all of that.
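If you like to sanity-check numbers like that, here’s a rough sketch of how such an estimate might be assembled.  Fair warning: the per-unit rates below are invented to land near the calculator’s figure – they are not Ravello’s actual price list:

```python
def hourly_cost(vcpus, ram_gb, storage_gb,
                cpu_rate=0.06, ram_rate=0.02, storage_rate=0.0002):
    """Hourly estimate as a simple sum of per-resource rates.
    All three rates here are illustrative guesses, not real pricing."""
    return vcpus * cpu_rate + ram_gb * ram_rate + storage_gb * storage_rate

# The 12 vCPU / 20GB RAM / 1TB example from the calculator
estimate = hourly_cost(vcpus=12, ram_gb=20, storage_gb=1024)
print(f"~${estimate:.2f}/hour")
print(f"~${estimate * 24 * 30:.0f}/month if left running around the clock")
```

The point being: for a lab you only power on when you need it, the hourly model is very kind to your wallet.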

What Ravello has in store for us on June 26 we don’t know, but I can assure you that it will be a treat to watch.  Speaking of watching, if you want to follow along with all the action you can do so by watching the live stream on the Tech Field Day page or on my VFD5 event page where all my content will live.

#VFD5 Preview – Scale Computing

Virtualization Field Day 5 in Boston will be Scale Computing’s fifth appearance at a Tech Field Day event, dating all the way back to VMworld 2012 when they launched their hyperconvergence solution, HC3.  Thinking about this is kind of funny really – picture the Scale Computing booth on the VMworld show floor – at the time they were a scale-out storage company, however they were launching their KVM-based hyperconvergence solution, which really has nothing to do with VMware at all!  One word – ballsy!

Either way, since then Scale has been promoting HC3, which targets the SMB market, and they have been doing a great job of it, as I’ve seen them at nearly every event I’ve been to, big or small.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what is it?

We all know what hyperconvergence is, right?  It’s just so hot right now!  Scale Computing, just like the Nutanixes and SimpliVitys of the world, has combined compute, network, and storage into one box, allowing businesses to gain performance and agility by implementing their building-block type architecture.  Scale currently ships three different models of the HC3, differing in capacity and memory…



And the uniqueness?

In order to succeed in any market you really need to have something which sets you apart from the “other guys” – something which makes your offering so compelling that you just have to have it!  What’s Scale’s?  I would most definitely say their niche is really knowing their target market, which in turn puts the HC3 at a very compelling price.

Scale has never once deviated from the market they say they serve.  They bring a hyperconverged, scalable platform to the SMB.  But price isn’t the only thing that helps them succeed in the SMB space.  They have really evaluated everything from their interface, to ease of use, to the options that they expose within their management software.  Basically, Scale provides the SMB with a solution to create and run VMs – no more, no less.  When I watched Scale at VFD4 I often found myself asking questions like, “So is this it?  You just click create VM and you are done?  Where are all the options?”  The answers I got were “Yes, you are done, there are no other options.”  It’s simply a solution for the SMB admin, who probably has little to no time to mess around with anything or learn anything new – it lets them get in, create a VM, and get out.

Now I’d be selling them a little short if I didn’t say that there were other options – they have the ability to take snapshots, to clone VMs, to setup replication between another Scale cluster.  All of these implemented in the same, easy to use, very little setup kind of way as everything else.  They also have all the “enterprisey” features as well – things like HA, Live Migration, Thin Provisioning etc – however they are all enabled by default and require no setup at all.

I’m very excited to see what Scale will be talking about at VFD5.  Their presentation was honestly one of my favorites at VFD4 (and that’s not just the shot of bourbon talking).  I’m interested to see if they have stayed true to their “SMB” focus when talking about any future releases – I believe that Scale really knowing their target market plays a big part in the successes they have been having.  If you want to follow along, be sure to watch the live stream over at the VFD5 page, or on this page, where I should have it up and running along with all of my VFD5-related content as well.  I can say that their CTO, Jason Collier, is a great speaker and it will be an entertaining 2 hours to say the least!

#VFD5 Preview–PernixData

I’ve had the pleasure of seeing PernixData a number of times, both at our local Toronto VMUGs as well as at VMworld.  Also, I have a couple of close friends working for Pernix, so I’m very familiar with the solutions they currently offer.  One interesting thing about Pernix is that they have a bit of a history of releasing new features and enhancements at Tech Field Day events (see their Storage Field Day 5 presentations), so I’m definitely looking forward to seeing them on June 24th in Boston.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what do they do?

PernixData in its simplest form is a server-side cache play.  Their software, FVP, essentially allows you to accelerate both reads and writes utilizing server components – both RAM and SSD drives.  Basically they sit in the middle of your data path, between your hypervisor sending the I/O and your storage array which receives it.  This allows your server components to essentially act as a cache for your storage array – and since they sit right next to all of your compute, you can imagine the benefits in terms of efficiency and performance FVP provides.

PernixData recognizes that the first thing that comes to mind in looking at all of this is that the cache – the SSD and RAM – is not shared storage.  So what happens when a host decides to take a walk and brings all of that non-committed write cache with it?  Because of situations just like this, Pernix replicates any writes across all nodes (or the nodes you choose) in your FVP cluster before acknowledging the write back to the VM – allowing for host failure scenarios and ensuring that your writes are safely written back to your storage array.  All this while still supporting advanced vSphere features such as HA and DRS.
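That write path is easy to picture in sketch form.  The snippet below is my own simplification of the idea – names and structure are invented, and it is not FVP’s actual implementation:

```python
def cached_write(io, local_cache, peer_caches, replicas=1):
    """Write-back sketch: land the write in the local flash/RAM cache,
    synchronously copy it to `replicas` peer hosts, and only then
    acknowledge to the VM.  Destaging the data to the backing array
    happens later, off the latency path."""
    local_cache.append(io)
    for peer in peer_caches[:replicas]:
        peer.append(io)               # synchronous copy before the ack
    return "ACK"                      # the VM now sees the write as durable

local, peer1, peer2 = [], [], []
print(cached_write("block-42", local, [peer1, peer2], replicas=1))
print(local, peer1, peer2)  # the write lives on two hosts before the array sees it
```

The key design point: the acknowledgement only goes back to the VM once a second copy exists, so losing a host never loses uncommitted writes.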

So is server-side cache a band-aid?

I’ve heard this term a lot in the industry – stating server side caching is just a band-aid for the real problem – your underlying storage.  But when I hear this I ask myself – if Pernix and other companies can deliver me a solution that drives enough IOPs and enough performance to successfully and efficiently run my environment do I really care if my underlying storage isn’t doing that on its own?  Honestly if no one is complaining and everything is running up to my expectations I feel like it’s a win-win – not a band-aid.

Pernix definitely has some awesome innovation in their software – FVP covers all angles when it comes to providing that fault tolerant, mirrored, read and write cache for your host.  You can enable caching on a per datastore or per VM level – allowing you to accelerate only your most crucial or needed workloads – also, FVP now supports not just block storage, but NFS as well!  I have no idea what Pernix has in store for us at VFD5 but you can bet it will be pretty awesome!  Once again, you can tune into all the action by watching the live stream on the VFD5 event page – as well, all my content and the live stream will also be on my VFD5 page.

#VFD5 Preview – NexGen

Alright, here’s another company presenting at VFD5 in Boston that I recognize but know very little about!  Thankfully the Stanley Cup playoffs are done and I now have a little extra room in my brain to take in all the info that will be thrown at us.  Anyways, I started to do a little digging on NexGen and oh boy, what a story they have!  Stephen Foskett has a great article on his blog in regards to the journey NexGen has taken – it’s pretty crazy!  Certainly read Stephen’s article, but I’ll try to summarize the craziness as best I can…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Basically, a couple of the LeftHand founders got together and founded NexGen – ok, this story doesn’t seem all that crazy so far.  Well, after a few years Fusion-io came in with their wallets open and acquired NexGen – again, not a real crazy spin on a story!  Moving on, we all know that SanDisk walked in and acquired Fusion-io, and with that got NexGen.  Then, the next thing you know, SanDisk spun NexGen out on their own, putting them right back where they started!  This all just seems wild to me!

So where do they stand today?

NexGen is a storage company – a storage company offering a hybrid flash array with software that helps their customers align their business practices with their storage by prioritizing the data they store.  So what does that really mean?  Basically it comes down to QoS and service levels.  NexGen customers can use these two concepts to define the performance, availability, and protection of their data by specifying the IOPS, throughput, and latency that they need for each and every application.  Depending on the service level assigned to a workload, NexGen can borrow IOPS from a lower-tiered service in order to meet the QoS defined on a business-critical application.
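That borrowing behaviour can be illustrated with a toy priority allocator.  This is my own sketch with invented tier names and numbers – NexGen’s real scheduler is obviously far more sophisticated:

```python
def allocate_iops(tiers, pool_iops):
    """tiers: list of (name, target_iops), highest priority first.
    Each tier is granted its target while the pool lasts; when the pool
    runs short, the lowest tiers effectively lend their IOPS upward."""
    grants, remaining = {}, pool_iops
    for name, target in tiers:
        grants[name] = min(target, remaining)
        remaining -= grants[name]
    return grants

tiers = [("mission-critical", 50000), ("business", 30000), ("non-critical", 20000)]
print(allocate_iops(tiers, 100000))  # everyone gets their full target
print(allocate_iops(tiers, 90000))   # non-critical lends 10k to keep the SLAs above it whole
```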

Another unique feature of NexGen Storage is the way they use flash.  Most arrays place their flash behind some sort of RAID controller, whereas NexGen accesses their flash over the PCIe bus, providing a redundant, high-speed, low-latency caching mechanism for both reads and writes.

There are certainly a lot more bells and whistles within the NexGen arrays, and a much bigger story to be told here.  The way NexGen is utilizing flash within the array is definitely piquing my interest, but honestly, I’m more interested in the story of the company and how all those acquisitions and spin-offs have helped them.  I’m sure they will address both of them at VFD5, and believe me, there will be more posts around NexGen and their offerings.  If you want to follow along during the VFD5 presentations you can see them live both on the official VFD5 event page, as well as on my VFD5 event page, where all my content will be posted.

#VFD5 Preview – Rubrik

There has been a lot of buzz about Rubrik over the last few weeks, with them going GA and coming up with, oh, you know, a cool 41 mil in Series B funding.  Certainly if you hadn’t heard of them before, you can probably recognize their name now!  I for one had not looked at their solutions at all.  I’ve heard the name, but never gave it a look!  That will change come June 25th at Virtualization Field Day 5, when Rubrik takes the stage to deep dive into what they dub “the world’s first converged data management platform”.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So what exactly is a data management platform?

It’s most certainly a fancy name, but it’s also much, much more.  In simple terms you can think of the Rubrik appliance (Brik) as a backup appliance – a backup appliance that is architected in such a way that you can scale to thousands of nodes, depending on the amount of data you are looking to protect.  Currently they offer the r330, a 3-node appliance with 10TB of disk, and the r340, a 4-node appliance with 15TB of disk.

Wait – did you say backup?

Sure, there are a lot of players in the backup space.  We have our traditional players that have seen it all.  Companies like Symantec and EMC come to mind.  Then virtualization came along and we started to see backup solutions being purpose built for virtualization.  Veeam, Unitrends, Trilead are near the top of the list.  So with all of these companies still at play within the data center backup space do we have room for one more?  Can Rubrik differentiate themselves from the others?

So what makes Rubrik unique?

Appliance driven – With the exception of Unitrends, I don’t see many backup vendors coming in the form of a full appliance.  Essentially what Rubrik has done is take the software and hardware requirements of their backup solution and deliver it in a 2U scalable appliance architecture.  Speaking of scale, Rubrik’s building-block architecture allows all tasks and operations to be run on any node within the cluster – therefore, adding more nodes doesn’t just expand capacity, it should also increase performance and availability as well.

Global File Search – This one is a big feature in my opinion.  There have been countless times when someone I support has come up to me looking for a file to be restored, but can’t remember where they saved that file.  “I just clicked it from my recent documents” they normally say.  Rubrik has a file search capability that spans across all of your VMs and actually incorporates autocomplete functionality – a little like Google for your backups.
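As a rough mental model (entirely my own toy, with invented file and VM names – Rubrik’s metadata index is certainly fancier), an autocomplete-style search across VM backups boils down to a prefix lookup over one global file index:

```python
import bisect

class FileIndex:
    """Toy global file index: (filename, vm) pairs kept sorted so that
    prefix queries -- the autocomplete case -- are two binary searches."""
    def __init__(self, entries):
        self.entries = sorted(entries)
        self.names = [name for name, _ in self.entries]

    def search(self, prefix):
        lo = bisect.bisect_left(self.names, prefix)
        hi = bisect.bisect_left(self.names, prefix + "\uffff")
        return self.entries[lo:hi]

idx = FileIndex([
    ("budget-2015.xlsx", "vm-finance01"),
    ("budget-draft.docx", "vm-desktop07"),
    ("notes.txt",        "vm-desktop03"),
])
print(idx.search("budget"))  # both budget files, and the VM each was backed up from
```

The user types a few characters, and every matching file – along with which VM it lives on – comes straight back, which is exactly the “I just clicked it from my recent documents” scenario.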

Multi-Tiered Storage – Man!  Some companies are just getting around to incorporating some kind of auto-tiering in their production storage – Rubrik is doing it in your backup storage.  What this does is increase efficiency and speed.  All data sent to the Rubrik appliance enters through a flash tier – and we all know the benefits of flash.  The flash tier also provides the basis for the global file search magic, as it stores all metadata on SSD as well.

Cloud Integrated – Well, Amazon S3 anyways.  Users are able to choose where backups are located, whether that be on premises or inside Amazon!  A great solution for any of those backups that you are required to keep long-term and that are seldom accessed!

I mentioned earlier that I don’t know a lot about Rubrik – In fact all that I know is what I’ve written in this blog post!  The buzz surrounding Rubrik has been nothing short of amazing so I’m excited to see what they have to offer and what separates them out from the already established players in the market!  On June 25th @ 10:30 we will see what Rubrik has to offer.  You too can watch the live stream on the VFD5 event page or on my VFD5 event page where all of my content and blogs about the show will be posted.