If you have visited this blog at all in the last four or so months you shouldn't be surprised to hear that I'm a pretty big Ravello Systems fan!  I was part of their beta for nested ESXi and I've written about my thoughts on that plenty of times.  With the beta out of the way and access granted to all the vExperts, Ravello Systems took hold of the clicker at VFD5 in Boston for the first of what I hope are many Tech Field Day presentations.

Disclaimer: As a Virtualization Field Day 5 delegate, all of my flights, travel, accommodations, eats, and drinks are paid for. However, I did not receive any compensation, nor am I required to write anything in regards to the event or the sponsors. I have also been granted early access to the Ravello Systems ESXi beta in the past, and have received free access as a vExpert.  All that said, this is done at my own discretion.

As I mentioned earlier, I've written plenty about what I've done with Ravello Systems.  The platform is great for configuration validation, home lab type stuff, and just exploring different functionality within vSphere.  You know, user type stuff.  At VFD5 Ravello went pretty deep into how their software functions within Google and AWS, so I thought I'd take a different approach and try to dive a little deeper into how their technology works this time around…to the point that my brain started to hurt.

HVX – A hypervisor that runs hypervisors, designed to run on a hypervisor – huh?!?!

Ravello's magic sauce, HVX, is custom-built from the ground up to be a high-performance hypervisor that runs applications (and other hypervisors) while itself running on a hypervisor (in the public cloud).  To say Ravello knows a thing or two about developing a hypervisor would be a major understatement – Ravello's co-founders, Benny Schnaider and Rami Tamir, were once the co-founders of another start-up called Qumranet.  You know, the same Qumranet that originally authored this little-known thing called the Kernel-based Virtual Machine, better known as…yeah, KVM.  So needless to say, they have a little experience in the hypervisor world.

The first dream within a dream

As we know, Amazon's EC2 is essentially an instance of Xen, whereas Google's Cloud utilizes KVM.  So when we publish our application inside of Ravello, we essentially deploy an instance of HVX, installed within a VM that has been spun up on either Xen or KVM – once our HVX hypervisor has been instantiated on the cloud hypervisor, our images or VMs within Ravello are deployed on top of HVX.  So even without yet touching ESXi within Ravello, we are two levels deep!

Now, in a native ESXi deployment we can take advantage of hardware virtualization extensions such as Intel VT and AMD-V (SVM).  In HVX, however, we've already been abstracted away from the hardware by the cloud hypervisor, so those extensions aren't available – instead, HVX implements a technology called binary translation to rewrite any executable code from the guests that is deemed “unsafe”.  This is coupled with direct execution, which allows any code that doesn't need to be translated to run directly on the CPU.  Honestly, if you want to dive deeper into binary translation and direct execution, Ravello has a great blog outlining it in a lot more detail than can fit into my maple-syrup-soiled, hockey-statistic-filled Canadian brain.

Aside from the performance features, HVX also exposes emulated hardware – the same hardware that we as VMware administrators are all used to, things like PVSCSI, VMXNET3, LSI, etc.  This is all available to our guests running on top of HVX, even to guests running on top of our ESXi guests on top of HVX – I know, right!
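To make the binary translation / direct execution split a little more concrete, here's a toy sketch of the general idea – this is purely illustrative and absolutely not Ravello's actual implementation (the instruction names and `hvx_*` handlers are made up): privileged instructions that can't safely run without hardware extensions get rewritten into safe emulation calls, while everything else runs straight through.

```python
# Toy model of binary translation + direct execution (illustrative only;
# the instruction set and hvx_* handler names are invented for this sketch).

# Instructions a guest can't safely run without Intel VT / AMD-V.
UNSAFE = {"cli", "sti", "hlt", "mov_cr3"}

# Hypothetical translation table: unsafe op -> safe emulated equivalent.
TRANSLATIONS = {
    "cli": "hvx_mask_interrupts",
    "sti": "hvx_unmask_interrupts",
    "hlt": "hvx_yield_vcpu",
    "mov_cr3": "hvx_switch_page_table",
}

def translate_block(instructions):
    """Return the code the nested hypervisor would actually execute."""
    out = []
    for op in instructions:
        if op in UNSAFE:
            out.append(TRANSLATIONS[op])   # binary translation
        else:
            out.append(op)                 # direct execution on the CPU
    return out

guest_code = ["add", "mov", "cli", "cmp", "hlt"]
print(translate_block(guest_code))
# -> ['add', 'mov', 'hvx_mask_interrupts', 'cmp', 'hvx_yield_vcpu']
```

The win is that the common case (plain arithmetic, memory moves) pays no translation cost at all – only the small fraction of "unsafe" code takes the slow path.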


So, what actually happens when we click that ‘publish’ button within the Ravello interface is somewhat unique – we know we need to install HVX into our cloud VM, but how many instances of HVX actually get deployed?  I'm not going to try to understand their algorithms around how they size their hypervisor, so I'll just say it depends on the resource allocation of the VMs within your application.  You could end up with a single VM running on one instance of HVX, or you could end up with 6 VMs running on 2 instances of HVX – however the deployment scenario plays out, you can rest assured that only VMs belonging to that single application will get deployed on those HVX instances – no VMs from other people's applications, and not even VMs from other applications that you may have.
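Since Ravello hasn't published their sizing algorithm, here's a deliberately simple first-fit sketch of the general idea – pack one application's VMs onto as few HVX instances as their resources allow, and never mix applications on an instance.  The capacity numbers and VM names are mine, not theirs.

```python
# Hypothetical first-fit packing of one application's VMs onto HVX
# instances (a sketch of the concept, not Ravello's real algorithm).

def place_application(vms, hvx_capacity_gb):
    """Pack one application's VMs, given as (name, ram_gb) pairs.

    Only this application's VMs ever share an HVX instance; other
    applications would get their own, separate call to this function.
    """
    instances = []  # each instance: {"free": gb_remaining, "vms": [...]}
    for name, ram in vms:
        for inst in instances:
            if inst["free"] >= ram:      # first instance with room wins
                inst["free"] -= ram
                inst["vms"].append(name)
                break
        else:                            # no room anywhere: new instance
            instances.append({"free": hvx_capacity_gb - ram, "vms": [name]})
    return [inst["vms"] for inst in instances]

app = [("esxi-1", 16), ("esxi-2", 16), ("vcenter", 8), ("dc", 4)]
print(place_application(app, hvx_capacity_gb=30))
# -> [['esxi-1', 'vcenter', 'dc'], ['esxi-2']]
```

Same shape as the article's example: a small application lands on one HVX instance, a bigger one spills onto two – but always within the boundary of a single application.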

That networking though!

Perhaps one of Ravello's major strong points is how it exposes a complete L2 network to the applications running on top of it!  By that I mean we have access to everything L2 provides – services such as VLANs, broadcast, multicast, etc. – within the overlay network Ravello implements.  As mentioned before, depending on the size of the application being deployed, we may or may not have multiple instances of HVX instantiated within the cloud provider.  If we are limited to a single HVX instance, then the networking is “simple” in the sense that it never has to leave that hypervisor – all switching, routing, etc. can be performed within the one HVX instance.  However, when an application spans multiple HVX instances, some creative technology comes into play.  Ravello has essentially built their own distributed virtual switching mechanism, which tunnels the traffic between HVX instances, or cloud VMs, over UDP.
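The core trick of L2-in-UDP tunneling is easy to sketch: take a raw Ethernet frame, prepend a small overlay header identifying the virtual network, and ship it over an ordinary UDP socket to the peer HVX instance, which unwraps it and delivers it to the local virtual switch.  The header layout below is invented for illustration – it is not Ravello's wire format (public overlays like VXLAN use the same general idea).

```python
# Sketch of L2-over-UDP encapsulation; the 4-byte header is an
# assumption for illustration, not Ravello's actual wire format.
import struct

OVERLAY_HDR = struct.Struct("!I")  # hypothetical virtual-network ID

def encapsulate(vnet_id, ethernet_frame):
    """Wrap a raw L2 frame so it can ride a UDP socket between hosts."""
    return OVERLAY_HDR.pack(vnet_id) + ethernet_frame

def decapsulate(payload):
    """Recover the virtual-network ID and the original L2 frame."""
    (vnet_id,) = OVERLAY_HDR.unpack_from(payload)
    return vnet_id, payload[OVERLAY_HDR.size:]

# A broadcast frame (dst ff:ff:ff:ff:ff:ff) – exactly the kind of L2
# traffic a plain cloud network would drop, but an overlay can carry.
frame = bytes.fromhex("ffffffffffff" + "020000000001" + "0806") + b"arp-payload"
packet = encapsulate(42, frame)        # sent via sendto() on one HVX host
vnet, recovered = decapsulate(packet)  # received on the peer HVX host
assert (vnet, recovered) == (42, frame)
```

Because the frame travels intact inside the UDP payload, broadcast, multicast, and VLAN tags all survive the trip – which is exactly why the guests see a "complete" L2 network even though the cloud provider underneath only offers L3.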


And storage…

The last challenge as it pertains to running Ravello applications inside the cloud comes in terms of storage performance.  Having HVX slotted in between the running applications and AWS allows Ravello to take advantage of the object storage capabilities of S3, yet still present the underlying storage to the VMs as a block device.  Essentially, when we import a VM into Ravello Systems, it's stored in its native format on top of HVX and appears to be a block device, but under the covers the HVX file system is storing this information in object storage.  On top of all this abstraction, HVX implements a copy-on-write file system, delaying the actual allocation of storage until it is absolutely needed – in the end we are left with the ability to take very fast snapshots of the images and applications we deploy, easily duplicating environments and allowing people like myself to frequently mess things up. :)
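A tiny model shows why copy-on-write makes those snapshots so fast: a "disk" is just a mapping from block numbers to objects in a shared store, so a snapshot copies only the mapping and shares every data object until a write diverges.  Everything here (class name, key scheme, the dict standing in for S3) is my own sketch of the technique, not HVX's file system.

```python
# Toy copy-on-write block device over a key/value "object store"
# (a plain dict standing in for S3; layout is illustrative only).

CHUNK = 4  # bytes per block – tiny, just for demonstration

class CowDisk:
    def __init__(self, store, chunks=None):
        self.store = store                # shared object store
        self.chunks = dict(chunks or {})  # block index -> object key

    def write(self, block, data):
        key = f"obj-{id(self)}-{block}-{len(self.store)}"  # fresh object
        self.store[key] = data            # allocate only on write
        self.chunks[block] = key          # remap just this block

    def read(self, block):
        key = self.chunks.get(block)
        return self.store[key] if key else b"\x00" * CHUNK  # unallocated

    def snapshot(self):
        # Near-instant: duplicate the mapping, share every data object.
        return CowDisk(self.store, self.chunks)

store = {}
disk = CowDisk(store)
disk.write(0, b"boot")
snap = disk.snapshot()        # fast snapshot – no data copied
disk.write(0, b"data")        # diverges without touching the snapshot
print(disk.read(0), snap.read(0))  # b'data' b'boot'
```

Blocks that are never written never consume storage, and a snapshot costs almost nothing until the original and the copy start to diverge – which is what makes "duplicate the whole environment and mess it up" so cheap.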


The Ravello presentation at VFD5 was one of my favorites from a technology standpoint – they did a great job outlining just what it is they do, how they do it, and how they are choosing to deliver their solution.  There were some questions around performance that were met head-on with a whiteboard, and overall it was a great couple of hours.  Certainly check out some of the other great community posts below centered around Ravello to get some more nested goodness.

Ravello has a great product which honestly blows my mind when I try to wrap my head around it – we have our VMs, running on ESXi, running on HVX, running on Xen, running on some piece of physical hardware inside an Amazon data center – attached to both Amazon EBS and S3 – and we are snapshotting these things, saving them as blueprints, and redeploying them to Google Cloud, which completely flips the underlying storage and hypervisor!!  It's exporting VMs from our current vSphere environments and deploying them into the public cloud, complete with all of their underlying networking – already set up for you!  Ravello has coined their nested virtualization capability “Inception”, and if you have ever seen the movie I'd say it certainly lives up to the name.  It has this magic about it – where you are in so deep yet still in control.  If you have a chance, check out their VFD5 videos and sign up for a free trial to check it out for yourself.