
Ravello Systems – Inception without the kick!

If you have visited this blog at all in the last 4 or so months you shouldn't be surprised to hear that I'm a pretty big Ravello Systems fan!  I was part of their beta for nested ESXi and I've written about my thoughts on that plenty of times.  With the beta out of the way and access granted to all the vExperts, Ravello Systems took hold of the clicker at VFD5 in Boston for their first of what I hope is many Tech Field Day presentations.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I did not receive any compensation nor am I required to write anything in regards to the event or the sponsors. I have also been granted early access to the Ravello Systems ESXi beta in the past, and have received free access as a vExpert.  All that said, this is done at my own discretion.

As I mentioned earlier I've written plenty about what I've done with Ravello Systems.  The platform is great for configuration validation, home lab type stuff, and for just exploring different functionality within vSphere.  You know, user type stuff.  At VFD5 Ravello went pretty deep into how their software functions within Google and AWS, so I thought I'd take a different approach and try to dive a little deeper into how their technology works this time around…to the point that my brain started to hurt.

HVX – A hypervisor that runs hypervisors, designed to run on a hypervisor – huh?!?!

Ravello's magic sauce, HVX, is custom built from the ground up to be a high performance hypervisor that runs applications (and other hypervisors) while itself running on a hypervisor (in the public cloud).  To say Ravello knows a thing or two about developing a hypervisor would be a major understatement – Ravello's co-founders, Benny Schnaider and Rami Tamir, were also the co-founders of another start-up called Qumranet.  You know, the same Qumranet that originally authored this little known thing called the Kernel-based Virtual Machine, better known as….yeah, KVM.  So needless to say they have a little experience in the hypervisor world.

The first dream within a dream

As we know, Amazon's EC2 is essentially an instance of Xen, whereas Google's Cloud utilizes KVM.  So when we publish our application inside of Ravello we essentially deploy an instance of HVX, installed within a VM that has been spun up on either Xen or KVM – once our HVX hypervisor has been instantiated on the cloud hypervisor, our images or VMs within Ravello are deployed on top of HVX.  So even without yet touching ESXi within Ravello we are 2 levels deep!  Now, in a native ESXi deployment we know we can take advantage of hardware virtualization extensions such as Intel VT and AMD SVM; in HVX, however, since we have already been abstracted away from the hardware by the cloud hypervisor, we don't have these.  Instead, HVX implements a technology called binary translation to translate any executable code from the guests that is deemed "unsafe", and couples this with something called direct execution, which allows any code that need not be translated to run directly on the CPU.  Honestly, if you want to dive deeper into binary translation and direct execution Ravello has a great blog outlining it in a lot more detail than can fit into my maple syrup soiled, hockey statistic filled Canadian brain.  Aside from the performance features, HVX also presents paravirtualized and emulated hardware up to its guests – the same hardware that we as VMware administrators are all used to – things like PVSCSI, VMXNET3, LSI, etc. – and this is all available to our guests running on top of HVX, even to guests running on top of our ESXi guests on top of HVX – I know, right!
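To make the binary translation / direct execution combo a little more concrete, here's a deliberately tiny toy model in Python.  This is absolutely not Ravello's HVX code – the instruction names and the "hypercall" idea are just stand-ins – it only illustrates the general split between code that can run straight on the CPU and privileged code that has to be rewritten to go through the hypervisor.

```python
# Toy model of binary translation + direct execution (illustrative only;
# NOT Ravello's HVX code, just the general technique in miniature).

# Privileged/"unsafe" operations that can't run natively inside a guest of a
# guest, because there is no hardware virtualization extension to trap them.
UNSAFE_OPS = {"cli", "sti", "mov_cr3", "in", "out"}

def translate(op):
    """Rewrite an unsafe instruction into a safe call into the hypervisor."""
    return ("hypercall", op)          # e.g. emulate the effect inside HVX

def run_guest_code(instruction_stream):
    executed = []
    for op in instruction_stream:
        if op in UNSAFE_OPS:
            executed.append(translate(op))   # binary translation path
        else:
            executed.append(("direct", op))  # direct execution on the CPU
    return executed

if __name__ == "__main__":
    guest_code = ["add", "mov", "cli", "add", "mov_cr3", "ret"]
    for step in run_guest_code(guest_code):
        print(step)
```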


So, what actually happens when we click that 'publish' button from within the Ravello interface is somewhat unique – we know we need to install HVX into our cloud VM, but how many instances of HVX actually get deployed?  I'm not going to try to understand their algorithms around how they size their hypervisor, so I'm just going to say it depends on the resource allocation of the VMs within your application.  You could end up with a single VM running on one instance of HVX or you could end up with 6 VMs running on 2 instances of HVX – however the deployment scenario plays out, you can be assured that only VMs belonging to that single application will get deployed on those HVX instances – no VMs from other people's applications, and not even any VMs from other applications that you may have.

That networking though!

Perhaps one of Ravello's major strong points is how it exposes a complete L2 network to the applications running on top of it!  By that I mean we have access to everything L2 provides – services such as VLANs, broadcast, multicast, etc. – within the overlay network Ravello implements.  As we mentioned before, depending on the size of the application being deployed, we may or may not have multiple instances of HVX instantiated within the cloud provider.  If we are limited to a single HVX instance, then the networking is "simple" in the sense that it never has to leave that hypervisor – all switching, routing, etc. can be performed within the one HVX instance.  However, when an application spans multiple HVX instances some creative technology comes into play: Ravello has essentially built their own distributed virtual switching mechanism which tunnels the traffic between HVX instances (the cloud VMs) over UDP.
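To give a rough feel for what "tunnelling L2 over UDP" means, here's a minimal VXLAN-style sketch in Python.  Ravello hasn't published their wire format, so the header layout, port and addresses below are assumptions purely for illustration – the point is simply that a raw Ethernet frame gets wrapped in a small header and shipped to the peer hypervisor as a UDP datagram.

```python
# Minimal sketch of tunnelling an L2 Ethernet frame between two hypervisor
# instances over UDP, VXLAN-style. Purely illustrative; Ravello's actual
# distributed switch and wire format are not public. Addresses and the port
# are made up for the example.
import socket
import struct

TUNNEL_PORT = 4789            # assumption: VXLAN-like UDP port
REMOTE_HVX = "10.0.0.2"       # the cloud VM hosting the other HVX instance

def encapsulate(vni, ethernet_frame):
    """Prefix the raw L2 frame with a small header carrying a network ID."""
    header = struct.pack("!I", vni)        # 32-bit virtual network identifier
    return header + ethernet_frame

def send_frame(vni, ethernet_frame):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encapsulate(vni, ethernet_frame), (REMOTE_HVX, TUNNEL_PORT))
    sock.close()

if __name__ == "__main__":
    # A fake broadcast frame: dst MAC, src MAC, EtherType, payload.
    frame = (b"\xff\xff\xff\xff\xff\xff" + b"\x02\x00\x00\x00\x00\x01"
             + b"\x08\x06" + b"who-has 192.168.0.10?")
    send_frame(vni=42, ethernet_frame=frame)
```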


And storage…

The last challenge as it pertains to running Ravello applications inside the cloud comes in terms of storage performance.  Having HVX slotted in between the running applications and AWS allows Ravello to take advantage of the object storage capabilities of S3, yet still present the underlying storage to the VMs as a block device.  Essentially, when we import a VM into Ravello Systems it's stored in its native format on top of HVX and appears as a block device, but under the covers the HVX file system is storing this information in object storage.  On top of all this abstraction HVX implements a copy-on-write file system, delaying the actual allocation of storage until it is absolutely needed – in the end we are left with the ability to take very fast snapshots of the images and applications we deploy, easily duplicating environments and allowing people like myself to frequently mess things up. :)
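Here's a conceptual sketch of that layering – a copy-on-write block device whose chunks ultimately live in an object store.  The chunk size, the dict standing in for S3, and the class itself are all mine for illustration; this is not HVX's real on-disk format, but it shows why allocation can be deferred and why snapshots are cheap.

```python
# Conceptual sketch of a copy-on-write block layer backed by an object store.
# Illustrates the general idea described above, not HVX's actual format;
# 'base'/'overlay' dicts stand in for something like S3 objects.
CHUNK = 64 * 1024                      # fixed-size chunks mapped to objects

class CowBlockDevice:
    def __init__(self, base_image_objects):
        self.base = base_image_objects  # read-only objects for the imported VM
        self.overlay = {}               # chunks written since import (the "copy")

    def read(self, chunk_id):
        # Writes are looked up first; untouched chunks fall through to the base.
        return self.overlay.get(chunk_id, self.base.get(chunk_id, b"\0" * CHUNK))

    def write(self, chunk_id, data):
        # Storage for a chunk is only allocated when the guest first writes it.
        self.overlay[chunk_id] = data

    def snapshot(self):
        # A "fast snapshot" is just a shallow copy of the chunk map,
        # not a copy of the data itself.
        return dict(self.overlay)

if __name__ == "__main__":
    disk = CowBlockDevice(base_image_objects={0: b"MBR...", 1: b"FS metadata"})
    disk.write(1, b"modified FS metadata")
    snap = disk.snapshot()
    print(disk.read(0)[:6], disk.read(1), len(snap))
```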


The Ravello presentation at VFD5 was one of my favorites from a technology standpoint – they did a great job outlining just what it is they do, how they do it, and how they are choosing to deliver their solution.  There were some questions around performance that were met head on with a whiteboard, and overall it was a great couple of hours.  Certainly check out some of the other great community posts below centered around Ravello to get some more nested goodness…

Ravello has a great product which honestly completely blows my mind when I try to wrap my head around it – we have our VMs, running on ESXi, running on HVX, running on Xen, running on some piece of physical hardware inside an Amazon data center – attaching to both Amazon EBS and S3 – and we are snapshotting these things, saving them as blueprints, and redeploying them to Google Cloud, which completely flips the underlying storage and hypervisor!!  It's exporting VMs out of our current vSphere environments and deploying them into the public cloud, complete with all of their underlying networking – already set up for you!  Ravello has coined their nested virtualization capabilities "Inception", and if you have ever seen the movie I'd say it certainly lives up to the name.  It has this magic about it – where you are in so deep yet still in control.  If you have a chance, check out their VFD5 videos and sign up for a free trial to check them out for yourself.

VMTurbo – allowing smart people to do smart things

Let's face it, our environments now are way more complex than they were 10 years ago!  Although some tasks and components may be easier to work with and not quite as specialized, we have a lot of them – and they all need to work, in perfect harmony, together.  The problem is that at times we get a couple members of the choir that get a little out of key – CPU starts screaming, network gets chatty, and next thing you know we have an environment that's screaming out of control: CPU starts shoving network, network starts drowning out memory, and to be quite honest, pretty much everyone in the choir at this point sounds like $@#!.

Although this scenario may sound a little far-fetched or a wee bit out there – I mean, CPU can't sing, we all know that! – any way you put it, a choir needs a conductor, a leader, someone who oversees the complete environment, instructing certain members to gear down and others to ramp up.  Last month in Boston at VFD5, VMTurbo showed us just how they can wave the baton when it comes to bringing together the components of enterprise IT.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Having just seen VMTurbo at #VFD4 in Austin only 6 months prior, I was skeptical as to what they would have to talk about in Boston, thinking it was mainly going to be the same presentation – I was wrong!  They could have easily filled another 4 hours talking about the new features they have embedded into the new release of their flagship product, Operations Manager 5.2.


Traditionally VMTurbo has gathered its market intelligence by polling and retrieving statistics and metrics from the hypervisor – while this is a good technique and is used by many monitoring solutions today, there are some applications which don't necessarily work well in this scenario.  Applications which look after their own resource usage – think SQL, Java heaps, etc. – may not accurately reflect their true usage at the hypervisor layer.  For this reason VMTurbo has released an Application Control Module (ACM), which completely integrates into their entire supply/demand model of monitoring.  To help put it into perspective let's have a look at SQL – ACM essentially brings in statistics around transactions, response time, database memory usage, etc. – all items which are not available within the hypervisor itself.
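To show the kind of in-guest data we're talking about, here's a quick sketch that pulls a few SQL Server counters straight from the standard DMVs with pyodbc.  This is my own example, not VMTurbo's ACM code, and the connection string is a placeholder – but counters like these simply aren't visible from the hypervisor's CPU and memory stats.

```python
# Illustrative only: the sort of in-guest SQL Server metrics an
# application-aware collector can see but the hypervisor cannot.
import pyodbc

# Placeholder connection string for the example.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sql01.lab.local;DATABASE=master;UID=monitor;PWD=secret")

QUERY = """
SELECT counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Transactions/sec', 'Total Server Memory (KB)',
                       'Batch Requests/sec')
"""

def collect_sql_metrics():
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        for name, instance, value in cursor.execute(QUERY):
            # Note: the per-second counters are cumulative, so a real
            # collector samples twice and takes the delta over time.
            print(f"{name.strip():30} {instance.strip():15} {value}")

if __name__ == "__main__":
    collect_sql_metrics()
```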


From here, VMTurbo users are able to define QoS policies, or SLAs, around their applications' performance.  Think "I need a response time of x milliseconds."  VMTurbo then looks holistically at your environment – it knows about the infrastructure underneath the app and what resources are available, and it now knows how that application is configured, its memory management, etc.  With all of this knowledge VMTurbo can then configure your environment and your application to a desired state, one in which we know we are running efficiently while meeting the SLAs and QoS policies we have set up for the application!
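If you boil that "desired state" idea down to its simplest possible form, it's a reconcile loop: compare what the application is delivering against its SLA and decide whether to buy or release resources.  The sketch below is entirely my own simplification with made-up numbers – VMTurbo's actual analytics engine is far more involved – but it captures the shape of the decision.

```python
# Highly simplified "desired state" loop. My own sketch, not VMTurbo's
# analytics; the SLA value and app data are invented, and the actions
# returned would be backed by real orchestration APIs in practice.
SLA_RESPONSE_MS = 20

def reconcile(app):
    observed = app["response_ms"]
    if observed > SLA_RESPONSE_MS:
        # Demand exceeds supply: buy more resource for the app (scale up,
        # grow buffer pool, move to a less contended host, ...).
        return {"action": "scale_up", "reason": f"{observed}ms > {SLA_RESPONSE_MS}ms"}
    if observed < SLA_RESPONSE_MS * 0.5:
        # Supply exceeds demand: release resource so it can be used elsewhere.
        return {"action": "scale_down", "reason": f"{observed}ms well under SLA"}
    return {"action": "none", "reason": "within desired state"}

if __name__ == "__main__":
    for app in ({"name": "erp-db", "response_ms": 35},
                {"name": "intranet", "response_ms": 6},
                {"name": "payroll", "response_ms": 18}):
        print(app["name"], reconcile(app))
```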


Aside from applications, VMTurbo has been busy with a few other cool enhancements as well!  With the adoption of public and hybrid cloud on the rise, they've seen a need to introduce a lot of enhancements in terms of networking – for example, knowing the physical location of applications is key to placing "chatty" applications close to each other in order to reduce latency, while still maintaining their "desired state" in terms of CPU, memory and storage as well.  They do this by grouping chatty applications together in what they call a vPOD.  From there OM leverages NetFlow to discover your physical switching configuration, and can work to ensure that vPODs are grouped together on the same top-of-rack switch or the same public cloud region, etc., moving the entire vPOD if one application requires more resources.
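Conceptually, building a vPOD is just clustering VMs whose mutual traffic crosses some "chatty" threshold.  The little sketch below does exactly that with invented flow numbers and a made-up threshold – it's not VMTurbo's algorithm, just an illustration of how NetFlow-style pairwise data could be turned into placement groups.

```python
# Toy grouping of "chatty" VMs into placement groups from pairwise traffic.
# My own sketch; the threshold and flow data are invented, and a real product
# would feed this from NetFlow/sFlow collectors.
from collections import defaultdict

CHATTY_MBPS = 50          # assumption: pairs above this belong together

# (vm_a, vm_b) -> average Mbps between them, e.g. derived from NetFlow records
flows = {("web01", "app01"): 120, ("app01", "db01"): 300,
         ("web01", "db01"): 5, ("batch01", "db02"): 10}

def build_vpods(flows, threshold=CHATTY_MBPS):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), mbps in flows.items():
        find(a); find(b)                    # register every VM we've seen
        if mbps >= threshold:
            union(a, b)                     # chatty pairs end up together

    groups = defaultdict(set)
    for vm in parent:
        groups[find(vm)].add(vm)
    return [sorted(g) for g in groups.values()]

if __name__ == "__main__":
    # web01/app01/db01 land in one group; batch01 and db02 stay on their own.
    print(build_vpods(flows))
```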


Just as VMTurbo has made steps to get more information out of the application stack, they are doing the same with storage!  By completely understanding the storage array underneath your infrastructure, OM is able to take action to overcome storage issues around capacity and performance.  Think of things such as knowing whether to expand a current volume or deploy a new one!  OM understands almost everything there is to know about your infrastructure and applications, and can therefore make the best decision on how to meet the SLAs defined on those applications from a storage standpoint – one time it may make sense to simply grow a volume, while other times, due to other applications running on that same volume, it may be more efficient to create a new volume and migrate the application in question.

VMTurbo has certainly taken a unique approach to monitoring and resolving issues within your environment.  This whole economic market play – supply/demand applied to your infrastructure and applications – is different, but it honestly makes sense when looking at resource utilization.  I like how Operations Manager has been built – the modular approach allows them to come out with new features such as the application and storage modules and simply plug them into the product, where they are inherited into the supply chain model and analytics can immediately be applied to them.  And as of now you can do it all from your own cloud on AWS!

If you want to watch the VMTurbo videos yourself you can do so here – or check out my complete VFD5 page here.  Also, we have had some other great community posts around what VMTurbo spoke about – be sure to check out each of them below as each delegate seemed to write about a different part of their presentation…

Operations Manager can certainly do some amazing things, allowing you to automate things such as moving an application to the cloud based on its supply/demand analytics – which at first sounds a bit scary – but hey, it wasn't that long ago that people were wary of enabling DRS, right?!?

As of today I shall be referred to as Vanguard!


I'm super excited today to finally announce that I've been selected to be a part of the newly announced Veeam Vanguard program!  I've been a vExpert a handful of times now, however I was never involved during the early years of that program – so to be part of a community recognition program like the Vanguard right from the get-go is a pretty cool honour and something I'm really looking forward to.  As for what the Vanguard program is and how it works, I'll leave you to the Rickatron's article published earlier today!

Veeam has always been a company that's been near and dear to me – they were really one of the reasons I started blogging here in the first place – to help share some of the experiences I had while using their software!  They were the first sponsor of this blog and have been around the whole time I've been engaged with the virtualization community!

Congrats to the other original nominees and to those still to come!  It's really cool to see all of the countries represented on the Vanguard page – it truly is a global program!  I'm excited to see how this program grows and how it all pans out!  Also, being able to call myself a Vanguard is pretty cool, in a medieval warfare kind of way!

Chrome and the endless Client Integration plug-in install prompt!

I think we can all agree that ol' Al Einstein was a pretty smart dude, right?  Especially when it came to his thoughts around insanity and the "doing the same thing over and over and expecting different results" spiel.  The thing is, Al's not here anymore, and while yeah, he was a wise one, he never had to install the vSphere Client Integration plug-in inside the newest version of Google Chrome – if he had, he'd certainly place himself inside his own definition!


So here's the issue at hand – one fine July morning I find myself sitting in front of the familiar prompt to install the Client Integration plug-in!  "How odd…" I thought, "I could've sworn I've already installed this!"  Either way it's not a big deal, I'll just install it again!  So I shut down the 84 tabs I have open inside of Chrome and away we go – next, next, next my way through the wizard, re-open my 84 tabs and…


Huh?!?  Now I know that I've installed this!  Maybe it's just not enabled!  So I check out the plugins section within Chrome to see that…


Wow!  It's not even listed anymore!  Now I know it used to be there, as I can remember going in and ensuring that it was always allowed to run.  After another failed attempt at installing it, with the same result, I do what I always do – Google!  And to my surprise I end up at this KB.

To the point!

To skip my little story and get to the point: the problem resides with NPAPI – an architecture used to extend browser functionality.  Google says the '90s-era architecture is outdated and causes crashes, and therefore, as of Google Chrome 42, it's disabled by default!  That same '90s-era architecture is the delivery model for the vSphere Client Integration plug-in – so you can see where the problem now lies!

To make a long problem short all we need to do in order to get the plug-in to run is simply enable NPAPI within Chrome, which is done by entering “chrome://flags/#enable-npapi” in the address bar and simply clicking ‘Enable’.


This will get us through….for now!  Who knows when Google may decide to phase support for this out completely?  They are on their own schedule and it wouldn’t surprise me to see them just drop support sometime in the near future!  I mean, it does stand for the Netscape Plugin Application Programming Interface, which to me sounds just a little archaic :)


In the meantime we will have to put up with a nifty little warning every time we fire up the vSphere Web Client – because who doesn't want to close two warnings every morning?!?!  Either way, it's working now – I tried something different and got different results!  My man Al would be proud!


Rubrik brought Apple simplicity and Google scale together, you’ll never guess what happens next…

Finally I have figured out a way to incorporate techniques from the vast amounts of click-bait on my Facebook feed into this blog!  Did it work?

In all seriousness though, Rubrik, who recently presented at Virtualization Field Day 5 in Boston, has brought together both the simplicity of Apple and the scale of Google, and what happened next was a scalable, converged backup and recovery solution containing both hardware and software.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Now, all the time I hear people moan about backup solutions, shrug them off and say they are boring – I for one am not one of those people.  I find the backup and recovery space very interesting, and honestly there has been a ton of innovation in this space over the last 10 years that has made backup solutions more than just a simple copy of your data – it's been quite fun to watch.  But backup and recovery has a lot of players in the game right now, so what can Rubrik really bring to the table?

Rubrik – Time machine for Cloud Infrastructure

The biggest and most prominent difference that sets Rubrik apart is that they are one of only a few vendors that ship their backup solution as a physical appliance.  While most backup vendors ship a software solution and leave it up to the customer to decide what to use for their backup storage, Rubrik is a scale-out solution that comes with both the software and the hardware.  Their first shipping product is the R300, supporting roughly 200 VMs with around 100TB of storage available.


With control of the storage as well as the software, Rubrik is able to leverage its various storage tiers to make both the backup and recovery processes more efficient.  Take Instant VM Recovery for example – although this is not a new feature (we've seen a lot of backup solutions integrate the ability to simply power on VMs directly from a backup file), Rubrik can go one step further by granting flash resources to the instantly restored VM as it sees fit – essentially turning your backup appliance into a tier 2 storage array, instantly restoring a new copy of your VM and allowing you to run it on the Rubrik appliance with full storage resources until you see a fitting time to migrate it back to your production array.  During the backup process Rubrik is also able to ingest the initial bits into its flash tier, allowing a much quicker response back to vCenter and keeping the amount of time snapshots stay open to a minimum.  From there data is deduplicated and compressed inline before being written down to another tier of disks or even into the cloud.

Wait did you say cloud?

Another nice feature that Rubrik has built in is Amazon S3 integration.  Essentially, when you set policies on a per-VM basis you not only specify the number of restore points you want on disk, but you can also state that data which is x number of years old be maintained inside of Amazon.  This provides customers with somewhat of an archiving solution, and saves money on purchasing additional local storage for these purposes.  When it comes time to restore data from Amazon, Rubrik has technology built in to pull individual files down from Amazon without having to pull down the entire VM or VMDK – certainly saving an organization a bit of money in transport costs if you only need to get at individual files.
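For a rough idea of what per-VM archiving to S3 involves under the covers, here's a plain boto3 sketch: age out restore points older than a policy threshold to a bucket, and pull a single object back down when needed.  The bucket, key layout and age threshold are placeholders, and Rubrik's real implementation works on its own metadata and data format rather than one object per file – this just illustrates the API calls that make the economics work.

```python
# Rough sketch of the archive-to-S3 idea using plain boto3. Not Rubrik's
# actual API or data format; bucket, keys and thresholds are made up.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
BUCKET = "backup-archive-example"          # placeholder bucket
ARCHIVE_AFTER = timedelta(days=365 * 2)    # e.g. "data older than 2 years"

def archive_restore_point(vm_name, taken_at, local_path):
    """Push an aged restore point to S3 instead of keeping it on local disk."""
    if datetime.now(timezone.utc) - taken_at < ARCHIVE_AFTER:
        return False
    key = f"{vm_name}/{taken_at:%Y-%m-%d}/restore-point.bin"
    s3.upload_file(local_path, BUCKET, key)
    return True

def restore_single_file(vm_name, taken_at, file_key, download_to):
    """Pull one archived object back down without retrieving the whole VM."""
    key = f"{vm_name}/{taken_at:%Y-%m-%d}/{file_key}"
    s3.download_file(BUCKET, key, download_to)

if __name__ == "__main__":
    two_years_ago = datetime.now(timezone.utc) - timedelta(days=800)
    archive_restore_point("sql01", two_years_ago, "/backups/sql01-2013.bin")
    restore_single_file("sql01", two_years_ago, "docs/report.xlsx", "/tmp/report.xlsx")
```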

If you are looking for more delegate and community opinions in regards to Rubrik and the technology they are creating I’d recommend the following…

Rubrik definitely has some awesome technology built into their 1.0 product, which is heavily focused around their metadata foundation.  Having control over both the hardware and software has certainly given them many advantages: fast search, quick recovery, and more control over how the backup data gets laid out on their tiered storage platform.  They do have some areas where they are behind – things like object level recovery for emails, Active Directory objects, databases, etc. – but to me the real key to Rubrik's success will hinge on pricing.  They told us it's not going to break the bank – but honestly, a lot of the time backup solutions get thrown down to a second tier budget line and are not at the forefront of strategic IT decisions – so having to drop a large sum of money on an insurance policy may not be in the cards for some IT shops.  That said, it's 1.0 and they have done a lot in the year and a bit they have been in existence – including adding some really smart and well known people to their staff.  Rubrik is certainly a company to keep an eye on in the coming year as they have built a solid, efficient foundation on which they've layered their backup solution – so far they have "taken the backup out of recovery" and I can't wait to see what they do next…

You can see more of my Virtualization Field Day 5 coverage here, as well as all of the recorded streams here.

Removing disaster recovery from your business with OneCloud

Let's face it!  We all have enough to worry about with our production environments!  We've got servers, storage, switches, fabrics, subnets, hypervisors, cabling – ugh, cabling!  Anyway, the point is that having to worry about all of the complexity and components within our production data center is enough to drive any IT pro crazy – and yet we are still asked to magically duplicate all these worries in a secondary location for disaster recovery purposes – in the words of my 6yo, "no fair".

Sure, we've seen virtualization come along and make our lives easier – and in turn it's made disaster recovery easier as well.  Encapsulating our physical servers into a group of files and abstracting away the hardware – aka a VM – has opened up so many doors for us to simply pick up and move that VM elsewhere.  The problem with DR is that there is still so much more to consider – things like what our networks look like in the secondary site, whether our VMs get re-IP'd during a fail-over, and one of the most important aspects – when we fail over, how do we ever get back?  Wouldn't it just be nice to simply take the worries of disaster recovery right out of your business?

OneCloud is all it takes!

OneCloud Software took the stage at VFD5 in Boston this June and began their pitch on just how they can solve the disaster recovery complexities inside organizations today.  Disaster recovery is one of the biggest use-cases for cloud today – lately we have seen VMware make an entrance into the space, Veeam has given their partners the option to become a cloud service provider for their solutions, and most disaster recovery solutions provide some sort of cloud-like integration. So how does OneCloud plan to enter this market, and what exactly can they bring to separate themselves and make them unique?  Their stance – hey, DR is complex, cloud is challenging – OneCloud can abstract away the complexities of both, turning the public cloud into a secure extension of your data center.  At its core, their first product, OneCloud Recovery, ties your on premises VMware environment together with Amazon AWS and then replicates data as it changes, essentially duplicating your environment in Amazon.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

So with that in mind let’s take a deeper look at just how OneCloud Recovery works…

First up is planning – as with any project, planning is a very important phase and OneCloud does not take this lightly.  Their solution relies upon a lightweight tool they call OneCloud Software Insight.  Insight is a tool that installs within your on premises VMware environment – from there it discovers and maps out your infrastructure, including your VMs, storage and network.  It then determines what VMs are eligible for their DR solution, allows you to change the policies applied, assigning greater or lesser RPOs on a per-VM basis, and uses that data to estimate costs within Amazon EC2.  Another feature of Insight I found interesting is that it also takes into account the amount of bandwidth required compared to the amount of bandwidth available to get your data into the cloud while still maintaining the SLAs and RPOs you have set up within the application – this is something I've not seen in a lot of other assessment/cloud readiness type tools.  At the end you are left with a very detailed breakdown of what your environment might cost to run in Amazon using OneCloud Recovery, taking into account all of the individual Amazon costs (EC2, S3, Glacier, etc.) as well as the licensing costs of OneCloud Recovery.  Honestly this tool by itself provides some unique value and data to a customer, even if they don't plan to leverage OneCloud Recovery.
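That bandwidth-versus-RPO check is really just arithmetic, and it's worth seeing how quickly a link becomes the limiting factor.  The numbers below are invented and this isn't Insight's actual model – it's a back-of-the-envelope sketch of the question "can this much changed data cross this much WAN inside this RPO?"

```python
# Back-of-the-envelope bandwidth vs RPO check. My own arithmetic sketch with
# invented numbers, not OneCloud's assessment logic.
WAN_MBPS = 100                      # usable uplink to AWS
UTILISATION = 0.6                   # assume only 60% is available for replication

def hours_to_replicate(change_gb):
    usable_mbps = WAN_MBPS * UTILISATION
    seconds = (change_gb * 8 * 1024) / usable_mbps    # GB -> megabits / Mbps
    return seconds / 3600

def rpo_feasible(change_gb_per_cycle, rpo_hours):
    return hours_to_replicate(change_gb_per_cycle) <= rpo_hours

if __name__ == "__main__":
    for vm, change_gb, rpo in (("sql01", 40, 1), ("files01", 40, 4), ("web01", 2, 1)):
        ok = rpo_feasible(change_gb, rpo)
        print(f"{vm}: {change_gb} GB per {rpo}h cycle -> "
              f"{hours_to_replicate(change_gb):.2f}h to replicate "
              f"({'OK' if ok else 'misses RPO'})")
```

With these made-up numbers, 40 GB of change against a 1 hour RPO misses (it needs about 1.5 hours to cross the wire), while the same change rate on a 4 hour RPO is fine – exactly the kind of mismatch an assessment tool needs to surface before you commit.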

On to the magic..

With planning out of the way, purchase orders issued, and the trigger pulled on OneCloud Recovery, the magic can now begin – but it's not magic, it's tech – and this is how it all works…

First up is installation – OneCloud Recovery is installed by simply importing an OVA into your existing vSphere environment and providing two sets of credentials: your vSphere and AWS credentials/keys. So far so good, eh?  From there automation kicks in: your environment is discovered, mapped out and blueprinted, and what they call the bootstrap process begins.  This process first creates a virtual private cloud within Amazon, and then deploys a couple of appliances to both your on premises and Amazon environments.  First, the management appliance is deployed, which allows you to manage your OneCloud Recovery environment – this can be done from either on premises or within Amazon AWS, since its configuration and data are replicated to both sites.  Second, we have the worker appliances, which are deployed at both ends – these are the heavy lifters of the solution which do all the moving and transfer of data, fail-over/fail-back, etc.  At the end of roughly 10 minutes or so, your complete virtual private cloud is built within Amazon and completely bridged to your on premises environment over a secure VPN tunnel established between the two sites – all automated!
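To put the bootstrap in more familiar terms, the AWS-side plumbing it automates looks roughly like the boto3 calls below: a VPC, a subnet, a virtual private gateway, a customer gateway pointing back at your edge device, and the site-to-site VPN connection between them.  This is my own sketch of the moving parts, not OneCloud's code, and the CIDRs, IP and ASN are placeholders.

```python
# Bare-bones illustration of VPC + site-to-site VPN plumbing with boto3.
# My own sketch of the moving parts a bootstrap like this would automate;
# not OneCloud's implementation. All values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def bootstrap_vpc(onprem_public_ip="198.51.100.10", onprem_asn=65000):
    vpc = ec2.create_vpc(CidrBlock="10.50.0.0/16")["Vpc"]
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.50.1.0/24")["Subnet"]

    # Virtual private gateway on the AWS side, attached to the new VPC.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

    # Customer gateway represents the on-premises VPN endpoint.
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp=onprem_public_ip, BgpAsn=onprem_asn
    )["CustomerGateway"]

    # The site-to-site tunnel that bridges the two environments.
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
    )["VpnConnection"]

    return vpc["VpcId"], subnet["SubnetId"], vpn["VpnConnectionId"]

if __name__ == "__main__":
    print(bootstrap_vpc())
```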

Next we create our protection groups.  A protection group is essentially a policy that defines certain SLAs/RPOs, etc.  Think of it in terms of a job that runs every 1 hour, every 2 hours, every 4 hours, etc.  Then we simply drag and drop our VMs into their designated protection groups depending on the RPO we wish to assign to the particular application.  The UI associated with OneCloud Recovery is very clean and looks like a joy to work with.
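Stripped of the UI, a protection group is a very small thing conceptually – a name, an RPO, and the set of VMs you've dropped into it.  The class below is purely my own illustration of that shape, not OneCloud Recovery's object model.

```python
# Tiny conceptual sketch of a protection group. Class and field names are
# mine for illustration, not OneCloud Recovery's object model.
from dataclasses import dataclass, field

@dataclass
class ProtectionGroup:
    name: str
    rpo_hours: int                      # e.g. 1, 2, 4, 8, 12 or 24
    vms: set = field(default_factory=set)

    def assign(self, vm_name):
        # The UI equivalent of dragging a VM into the group.
        self.vms.add(vm_name)

if __name__ == "__main__":
    tier1 = ProtectionGroup(name="tier1-hourly", rpo_hours=1)
    tier3 = ProtectionGroup(name="tier3-daily", rpo_hours=24)
    tier1.assign("sql01")
    tier3.assign("fileserver01")
    print(tier1, tier3, sep="\n")
```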

With our VMs now assigned to a protection group, the technology kicks into high gear.  The protection process begins with the OneCloud Recovery worker first snapshotting the VM in order to free up its underlying disks.  It then converts the data into their own highly compressed format and stores it locally on some tier 2/3 storage on-site, finally replicating the data to AWS – obviously performing a full seed first and subsequently leveraging vSphere's Change Block Tracking from that point forward.
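For the curious, the Change Block Tracking piece is a standard vSphere API that any backup product can call – here's a rough pyVmomi sketch of querying the changed areas of a disk since a previous changeId.  Host, credentials, VM name and the stored changeId are placeholders, CBT has to already be enabled on the VM, and a snapshot must exist; this is not OneCloud's worker code, just the underlying API.

```python
# Rough pyVmomi sketch of a Changed Block Tracking query. Not any vendor's
# code; host, creds, VM name and changeId are placeholders, and CBT must be
# enabled on the VM with a snapshot already taken.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def changed_areas(vm, snapshot, device_key, last_change_id, capacity_bytes):
    """Return (start, length) extents changed since last_change_id."""
    offset, extents = 0, []
    while offset < capacity_bytes:
        info = vm.QueryChangedDiskAreas(snapshot, device_key, offset, last_change_id)
        extents.extend((e.start, e.length) for e in info.changedArea)
        offset = info.startOffset + info.length
    return extents

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "sql01")
        snap = vm.snapshot.currentSnapshot          # snapshot taken for this cycle
        disk = next(d for d in vm.config.hardware.device
                    if isinstance(d, vim.vm.device.VirtualDisk))
        # "*" means "all allocated blocks" (the full seed); a changeId stored
        # from the previous cycle yields just the delta.
        print(changed_areas(vm, snap, disk.key, "*", disk.capacityInBytes))
    finally:
        Disconnect(si)
```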

As far as fail-over options go, like many others OneCloud Recovery provides both test and live fail-overs.  They do, however, offer a bit of uniqueness in this process.  We all know that compute in the cloud costs money, right?  Imagine we had an 8 vCPU, 32GB RAM production VM replicated into AWS – do you think we really need all that compute and memory during a fail-over test?  Probably not!  OneCloud Recovery recognizes this and allows you to power on your VMs inside of AWS in an undersized fashion!  I mean, that 8 vCPU/32GB RAM VM may function fine with 2 vCPU/16GB of RAM with no load during a test, thus saving you money!
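The undersizing idea is simple enough to show in a few lines – take each protected VM's production footprint and shrink it for a no-load test fail-over.  The halving rule and the sizes below are mine for illustration (the example above trims CPU even harder); OneCloud's actual sizing logic isn't public.

```python
# Quick sketch of "undersized test failover" sizing. The table and the
# halving rule are mine for illustration, not OneCloud Recovery's logic.
PROD_SIZES = {"erp-app01": (8, 32), "sql01": (4, 16), "web01": (2, 4)}  # vCPU, GB

def test_failover_size(vcpu, ram_gb, factor=0.5, min_vcpu=1, min_ram_gb=2):
    """Shrink the footprint for a no-load DR test, with a sane floor."""
    return max(min_vcpu, int(vcpu * factor)), max(min_ram_gb, int(ram_gb * factor))

if __name__ == "__main__":
    for vm, (vcpu, ram) in PROD_SIZES.items():
        t_vcpu, t_ram = test_failover_size(vcpu, ram)
        print(f"{vm}: prod {vcpu} vCPU/{ram} GB -> test {t_vcpu} vCPU/{t_ram} GB")
```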

When it’s time to fail-over for real OneCloud Recovery also has some unique features!  While most disaster recovery solutions halt their protection during a fail-over action, OneCloud Recovery actually continues to provide you with options to protect your VMs, even while they are running in AWS.  They do so by leveraging Amazon EBS snapshots and protecting the data within your AWS region.
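At the AWS end, that in-cloud protection boils down to snapshotting the EBS volumes the failed-over instances are running on.  Here's what that looks like with plain boto3 – my own sketch with a placeholder instance ID, not OneCloud's implementation.

```python
# Snapshotting every EBS volume attached to a failed-over instance with
# plain boto3. My own sketch; instance/volume IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def protect_failed_over_instance(instance_id):
    """Snapshot every EBS volume attached to an instance running in AWS."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"DR protection for {instance_id}",
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

if __name__ == "__main__":
    print(protect_failed_over_instance("i-0123456789abcdef0"))
```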

As far as fail-back is concerned, OneCloud Recovery has some unique features as well.  When a primary data center comes back online, OneCloud Recovery is able to determine whether any existing data is still intact.  If it is, OneCloud Recovery can perform its fail-back in a delta fashion, moving only the changes made in AWS during the fail-over back to your primary data center rather than having to extract entire VMs out of AWS, which could be both costly and, more importantly, time consuming.

More to come from OneCloud?

In terms of OneCloud Recovery we are looking at a 1.0 product, meaning there is a lot of development left to be done on the solution.  My personal requests – I'd love to see the software get a little more aggressive and customizable in terms of its RPOs.  Currently RPOs are set in stone at 1, 2, 4, 8, 12 and 24 hours.  Aggressively, I'd like to see some smaller numbers here – 1 hour of lost data could equate to quite a lot of money for a lot of enterprises.  Also, I'd love to see integration with more clouds than just Amazon – think Google, for example, as there could potentially be some cost savings for customers there.

Looking at OneCloud Software as a company, however, I don't think that OneCloud Recovery is going to be their only product.  They have a solid core technology that discovers, maps, and blueprints our on premises environments, then duplicates that to Amazon.  I don't see this only being used for disaster recovery.  I had a chat with Marc Crespi, CEO and Co-Founder of OneCloud, and brushed this past him – he didn't confirm or deny any of it – but if I had to guess I can definitely see them exploring other areas in the future – think DevOps, migrations, hybrid cloud, etc.  All areas that OneCloud Software's core blueprint technology may be a good fit for.  This is all speculation on my part, but still, watch these guys – they are on to something…

If you are looking for more info around OneCloud Recovery I definitely recommend checking out some of the other great community posts resulting from VFD5…

OneCloud mentioned that they waited to go to market with OneCloud Recovery until they had met three internal requirements – it must be simple, it must be cost optimized, and it must be a complete solution.  Honestly I think they have accomplished those goals with OneCloud Recovery.  The automation that has been built into the product, coupled with a very clean UI, takes the cake in terms of simplicity – fewer knobs to turn = more simplicity!  Cost optimization?  Well, the Insight tool definitely gives you a great understanding of all the costs involved in using OneCloud Recovery, even taking into account that they can down-size VMs when running in Amazon.  And as for a complete solution, they have for sure achieved what they set out to do – establish the public cloud as an extension of your data center for disaster recovery purposes.