A Google Cloud to Amazon vMotion – The Ravello Way!

Today Ravello Systems, a company based out of Palo Alto and Israel, announced a new beta – one that I think is going to go over very well within the VMware community, as it will allow us to spin up vSphere labs, complete with vCenter Server, ESXi hosts, domain controllers, storage and network services, and all the VMs that go along with them, inside of Google and Amazon's cloud.  To be honest I was kind of skeptical when I first started working with Ravello.  I mean, come on, an ESXi host in Amazon, let alone an ESXi host running VMs inside of Amazon, an ESXi host running VMs with little to no performance penalty, all running within Amazon – you can see why I might have cringed a bit.  But Ravello gave me a shot to try it for myself – and during the introductory chat, as they were showing me how things worked, I thought, hey, what a use case for the new cross vCenter vMotion capabilities in vSphere 6!  A lab in Amazon, a lab in Google Cloud, and VMs migrating between them – how cool is that?

Who and what is Ravello Systems?

Now, before I get into the details of the vMotion itself I want to take a step back and explain a little bit about Ravello Systems themselves and what they have to offer.  Ravello was founded in 2011 with the sole purpose of driving nested virtualization to the next frontier, and did so when they launched their product globally in August of 2013 (you had to have seen the scooters at VMworld :) ).  They didn't just want to simply provide an environment for nested virtualization though; they wanted to make it simple and easy for companies to replicate their data center infrastructure into the public cloud.  The core technology behind all of this is their HVX hypervisor – essentially acting as a cloud VM, sitting in either Amazon or Google and providing overlay networking and storage to the VMs that are placed on top of it.

RavelloHVX

As per the diagram above, the VMs present can be built from scratch or imported via an OVA within Ravello's very intuitive, easy-to-use interface – but perhaps more interestingly, you can utilize the Ravello Import Tool, point it at your ESXi host or vCenter, and import VMs directly from your environment into the cloud!  But they don't stop there – Ravello can also detect and create every network your VM is attached to, deploying an exact duplicate of your network infrastructure!  Now if this wasn't good enough for you, the beta announced today adds the ability to support Intel VT through HVX – which means we can now run VMs on top of ESXi on top of HVX on top of Amazon or Google!  True inception, leaving us with the setup shown in the diagram below.

RavelloHVXVT

A great place to break things!

There is a reason why Ravello dubs their technology as having the ability to create “Smart Labs”!  Throughout my early access to the solution I broke and fixed so many things within my applications – and Ravello always gave me a way to rebuild or reconstruct my labs in a very efficient manner.

First up, we are able to save our VMs to the library – which is essentially a personal set of VMs and images that we can re-use in all of our applications.  For example, I only had to build my ESXi 6.0 image once – after saving it to the library I was able to simply drag and drop this VM as many times as needed into as many applications as needed, re-IPing and re-naming it after I was done.

Having the ability to re-use VMs is cool, but the blueprint functionality that Ravello provides is really where I see value!  We are able to take a complete application – in my instance an ESXi host, domain controller, vCenter Server, etc. – and save the entire application as a blueprint.  Blueprints are then available to be used as starting points for new applications – meaning I can build a complete lab on Amazon, save it as a blueprint, and then publish a new application to Google which is an exact copy, networks and all.  Blueprints are an excellent way to test out the different public clouds, as well as to version or snapshot your entire lab before making any major changes – if things go awry you can simply republish your saved blueprint to a new application.

RavelloBlueprints

Enough talk – Let’s see the vMotion!

Alright!  Let’s get to it!  Let me first warn you, the environment I built to do this was quick and dirty – not a lot of polishing going on here.

The two applications we will be using are Google-vxlan and EC2-vxlan – I’ll let you guess which public clouds each is published to.

ravellovmcanvas

As shown above, these applications are pretty similar; each contains an Ubuntu server (used to establish the VXLAN tunnel between EC2 and Google), a pfSense appliance that provides a VPN for my vMotion networks, a vCenter Server (the Windows version), and an ESXi host (just one for now).  The EC2 application also contains a jumpbox VM which provides entry into the local network as well as DNS services.

ravelloNetworkingboth

As far as networking goes, the setup at both Amazon and Google is almost identical, with the exception of the jumpbox.  The 192.168.0.0/24 network is available at both EC2 and Google.  The 10.0.0.0/24 network is the only network that is routed to the internet, and is therefore my only access into the labs outside of the Ravello GUI – this is why the jumpbox also has a connection to this network, to act as an RDP gateway of sorts.  The two Ubuntu servers each have an elastic public IP attached to them in order to ensure the public IP doesn't change and mess up my VXLAN config.  The free trial of Ravello gives you two elastic IPs and four other DHCP public IPs (subject to change every now and then).  The VXLAN tunnel is established between the two elastic IPs in order to provide Layer 2 connectivity between Amazon and Google.  The pfSense boxes each have a dynamic public IP attached to them, with an IPsec tunnel established between the 192.168.1.0/24 and 192.168.2.0/24 networks.
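For anyone curious what those Ubuntu servers are actually doing, the sketch below shows roughly how a point-to-point VXLAN tunnel like this can be stitched together on Linux.  To be clear, this is a hedged illustration of the general approach and not my exact configuration – the VNI, interface names and peer address are all placeholders.

```python
#!/usr/bin/env python3
# Hedged sketch: one way to build a point-to-point VXLAN tunnel between the two
# Ubuntu servers using iproute2. The VNI, interface names and the peer address
# below are placeholders, not the exact values from my lab.
import subprocess

PEER_ELASTIC_IP = "198.51.100.20"  # the other cloud's elastic IP (placeholder)
VNI = 100                          # arbitrary VXLAN network identifier

commands = [
    # VXLAN interface tunnelling over the internet-facing NIC (assumed to be eth0)
    f"ip link add vxlan{VNI} type vxlan id {VNI} remote {PEER_ELASTIC_IP} "
    f"dstport 4789 dev eth0",
    # Bridge the tunnel together with the NIC sitting on the stretched
    # 192.168.0.0/24 segment (assumed to be eth1)
    "ip link add br-lab type bridge",
    f"ip link set vxlan{VNI} master br-lab",
    "ip link set eth1 master br-lab",
    f"ip link set vxlan{VNI} up",
    "ip link set br-lab up",
]

for cmd in commands:
    subprocess.run(cmd.split(), check=True)  # run as root on each Ubuntu server
```

Run something along these lines (with the peer address flipped) on both Ubuntu servers and the 192.168.0.0/24 segment behaves like one Layer 2 network spanning Amazon and Google.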

On the VMware side of things I have two vCenters with embedded PSCs (I know – bad practice) – one in Amazon and one in Google – which are attached to the same SSO domain and configured in Enhanced Linked Mode.  Therefore whatever is at Google can be seen at Amazon and vice versa.  As far as vMotion goes, I've simply enabled it on my existing management interfaces (more bad practice – but hey, it's a lab).  There is local storage attached to the ESXi hosts and one VM named EC2-VM1 present.

So my goal was to migrate this VM from Amazon to Google and back again, taking both the compute and storage with it.  Now, just writing about a vMotion is not that exciting, so I've included a video below so you too can see it move :)  It's my first attempt at a video, and there were some screaming kids around while I made it, so yeah, no narration – I'll try and update with a little tour of the Ravello environment later :)

So there you have it – a VM moving from Amazon to Google and back, all while maintaining its ping response – pretty cool!
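For anyone who would rather script the move than click through the Web Client wizard, the same cross vCenter migration can also be driven through the vSphere API.  Below is a rough pyVmomi sketch of a cross-vCenter RelocateVM_Task – I used the Web Client for the video, so this isn't the exact code from my lab, and the host names, object names, credentials and SSL thumbprint are all placeholders.

```python
#!/usr/bin/env python3
# Hedged sketch of a cross-vCenter vMotion with pyVmomi (vSphere 6.0+).
# All host names, object names, credentials and the thumbprint are placeholders.
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

# Source vCenter (currently running EC2-VM1) and destination vCenter (Google side)
src = SmartConnectNoSSL(host="vcenter-ec2.lab.local",
                        user="administrator@vsphere.local", pwd="VMware1!")
dst = SmartConnectNoSSL(host="vcenter-gce.lab.local",
                        user="administrator@vsphere.local", pwd="VMware1!")

vm = find_obj(src.RetrieveContent(), vim.VirtualMachine, "EC2-VM1")
dst_content = dst.RetrieveContent()

spec = vim.vm.RelocateSpec()
spec.host = find_obj(dst_content, vim.HostSystem, "esxi-gce.lab.local")
spec.pool = spec.host.parent.resourcePool                  # destination resource pool
spec.datastore = find_obj(dst_content, vim.Datastore, "gce-local-ds")
spec.folder = find_obj(dst_content, vim.Datacenter, "GCE-DC").vmFolder

# The destination vCenter is described to the source vCenter via a ServiceLocator
spec.service = vim.ServiceLocator(
    url="https://vcenter-gce.lab.local",
    instanceUuid=dst_content.about.instanceUuid,
    sslThumbprint="AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD",
    credential=vim.ServiceLocatorNamePassword(
        username="administrator@vsphere.local", password="VMware1!"))

task = vm.RelocateVM_Task(spec, vim.VirtualMachineMovePriority.defaultPriority)
print("Cross vCenter vMotion task started:", task.info.key)
```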

Is Ravello worth it?

So, with all this, the question now remains: is Ravello worth the cost?  Well, considering that Ravello estimates the cost of a two-ESXi-node, vCenter and storage lab to be on average $0.81 – $1.71 per hour (usage based, no up-front costs), I would certainly say it is!  The ability to run nested ESXi hosts on top of the public cloud provides a multitude of use cases for businesses – but honestly, I see this being a valuable tool for the community.  I plan on using Ravello solely for my home lab usage over the next year or so – it's just so much nicer to break things and simply re-publish an application than it is to try and rebuild my lab at home.  If you want to give Ravello a shot you can sign up for the beta here.  Even after the beta expires you simply swipe your credit card and pay Ravello directly – no Amazon accounts, no Google bills – just Ravello!  You will be limited during the beta and free trial in the amount of CPU, RAM and concurrent powered-on VMs, but they definitely give you enough resources to get a decent lab setup.

Ravello has a great solution, and you can certainly expect more from me in regards to my lab adventures in the public cloud.

Disclaimer: Ravello gave me early access to their ESXi beta in order to evaluate their solution – I first signed up for a free trial and did get the amount of RAM and number of VMs available to me increased.  They didn't, however, require that I write this post – or anything for that matter – nor did they provide any $$$ for anything that was written – these are all my words!

Veeam announces GA of Veeam Endpoint Backup

During their inaugural VeeamON conference last October, Veeam announced the beta of Veeam Endpoint Backup.  I wrote a little overview of Endpoint Backup in case you need a refresher.  Now, Veeam Backup and Replication has long been known for being purpose-built for the virtual data center, and Endpoint Backup is the company's answer to bringing the same great Veeamy tech to your physical laptops and desktops.  Today, that beta has ended and Veeam Endpoint Backup is now generally available.

So what’s changed since the beginning of the beta?

A lot, actually!  Being in beta for six months has really helped Veeam ensure that they are releasing a genuinely tried and tested, rock-solid product into the market.  In fact, throughout the beta many of the new features now included in Endpoint Backup were suggested by users just like you and me on the community forums surrounding the beta.  Veeam, as always, has done a great job of taking user feedback into account and delivering a product that's packed full of useful features and "just works".  There are a lot of features in VEB and you can see them all here – but I'd like to go over a few of my favorites.

Integration between VEB and VBR

Coupling Patch #2 of Veeam Backup and Replication (released later this month) with the GA of Veeam Endpoint Backup brings the awesome ability to monitor, control and restore endpoint backups from within VBR.  By backing our endpoints up directly to a Veeam backup repository, we are now able to take advantage of many of the traditional VBR restore goodies with our physical backups.  Aside from simple file-level recovery, application-item restores – SQL tables, Exchange and Active Directory objects – can all be performed on our physical backups now as well.  Although the product is geared towards endpoints, meaning desktops and laptops, I see no reason why you couldn't install it on some of those last physical servers you have lying around.  In fact, Veeam themselves say that although it isn't built for servers, it will work on Server 2008 and above.

VeeamEndpointToVBR

Veeam has also added the ability to export our physical disks from the backups directly into a VMDK, VHD, or VHDX file.  Now, this isn't a true P2V process – they aren't removing any drivers or services or preparing the disk to be virtual in any way; that isn't their intention.  This is simply another way to recover, another way to get the data you need – and honestly, if you wanted to try and build a VM out of these exported disks, I'm sure there will be posts out there in the next few months on how to do so.

SecPermissions

In terms of security, Veeam has added the ability for administrators to set access restrictions on their backup repositories.  What this does is allow us to grant certain users access to certain repositories, while restricting access to others.

Aside from the new integration, Veeam Endpoint backups that are stored in a Veeam backup repository can also take advantage of existing VBR features, such as backup encryption, traffic throttling, monitoring of incoming backups, email status alerts, and support for Backup Copy and tape jobs to get those backups offsite.

It’s not just about B&R

Sure, the integrations with VBR are pretty cool, but they aren't the only thing that's included.  Yeah, we have all of the traditional endpoint backup features like incrementals, multiple target options, and scheduling, but it wouldn't be a Veeam product without a few extra goodies baked in.  I'm not going to go in depth on them all, but listed below are a few of my favorites:

Full support for BitLocker drive encryption – This gives you the ability to decrypt your BitLocker backups before restoring, directly from within the Endpoint GUI.

Ability to control the power state of the computer post-backup – If you have your computer set to back up at the end of your work day, you can leave knowing that once your backup has completed Veeam will, in true green fashion, power down your workstation.

Backup triggers such as "When backup target is connected" – Veeam will monitor for when you plug in that external USB drive or connect to the network that you have set up as your backup target, and can trigger the backup process immediately thereafter.

Support for rotated USB drives – If you want to rotate your backups on one USB drive one week and another the next, Veeam Endpoint Backup can handle this for you, allowing you to back up to one drive while the other goes offsite.

On-battery detection – Backups can be automatically prevented from starting when Veeam detects that your laptop is running on battery with less than 20% of its runtime remaining – ensuring VEB doesn't chew up valuable power in your time of need :)

So what hasn’t changed?

We talked about what has changed since the beta bits first shipped in November, but perhaps the most important and most cared-about feature lands in the "What hasn't changed?" category.  What hasn't changed is that Veeam Endpoint Backup was put into beta as a free product and will remain free now that it is generally available.  Veeam has a long history of providing free tools for the community – they have Backup and Replication Free, the free SQL, Active Directory and Exchange Explorers, the old FastSCP which was free, and now Veeam Endpoint Backup Free!  There should be no barrier stopping you from going and checking it out for yourself.

Now, in my VeeamON post I tried to determine the future of this product – where it would fit in, what features Veeam would add to it – and honestly I was way off on a lot of them, but one I was sure would come was the integration with Backup and Replication – and it's here now!  Do I think Veeam is done innovating in this area?  Absolutely not!  From my experience, Veeam is a company that never stops moving.  I'm excited to see Veeam Endpoint Backup go GA, and I'm excited to see what the future holds.

Friday Shorts – Certs, Tools, Loads, VVOLs and #SFD7

It's been quite a long time since my last "Friday Shorts" installment and the links are certainly piling up!  So, without further ado, here are a few tidbits of information that I've been meaning to share over the last little while…

A little bit of certification news!

VMware education and certification has certainly taken its fair share of backlash in the last few months, and honestly it's rightly deserved!  People don't like it when they invest in a certification, both in money and time, just to have an expiry date placed on all their efforts!  Either way, that's old news and nothing is changing there.  What I was most concerned about was whether or not I would be able to skip the upgrade of my VCP and just take a VCAP exam instead, which would in turn re-up my VCP.  Then the announcement of no more VCAPs was made – which threw those questions of mine for a loop – but now, after this announcement, it appears that there will be an upgrade/migration path for current VCAP holders to work towards the newly minted VCIX.  Have a read, figure out where you fit in, and start planning.  I already hold a VCAP5-DCA, so by taking the design portion of the VCIX I would be able to earn my VCIX certification in full – sounds good to me!  Now we just need the flipping exam blueprints to come out so we can all get to studying! :)

New version of RVTools!

Yup, the most famous piece of "nice-to-haveware" has an updated version.  I've used RVTools for quite some time now – as an administrator, any piece of free software that I can get to help me with my job is gold!  RVTools saves me a ton of time when gathering information about my virtual environment and my VMs.  If you haven't used it, definitely check it out – if you have, upgrade – you can see all of the new changes and download it here!

KEMP giving away LoadMaster!

Keeping on the topic of free tools, let's talk about KEMP for a moment!  They are now offering their flagship KEMP LoadMaster with a free tier!  If you need any load balancing done at all I would definitely check this out!  Now, there are going to be some limitations, right – nothing in this world is completely free :)  It's community supported only and you can only balance up to a maximum of 20 MB/s – but hey, it may be a great solution for your lab!  Eric Shanks has a great introduction on how to get it up and going on his blog, so if you need a hand check it out!  I also did a quick review a few months back on load balancing your Log Insight installation with KEMP.  Anyways, if you are interested, go and get yourself a copy!

You got your snapshot in my VVOL!

As my mind wanders during the tail end of the NHL season I often find my mind racing about different things during the commercial breaks of Habs games – this time I said to myself, self, do snapshots work the same when utilizing the new VVOL technology?  Then myself replied, and it said, hey self, you know who would know the answer to this?  Cormac Hogan.  A quick look at his blog and lo and behold there it was, a post in regards to snapshots and VVOLs.  If you have some time check it out – Cormac has a great way of laying things out in quick, easy-to-follow blog posts and this one is no exception.  In fact, before the first-place team in the Eastern Conference returned from the TV timeout I had a complete understanding of it – now, back to our regularly scheduled programming.

 #SFD7 – Did you see it?

It appears that most if not all of the videos from Storage Field Day 7 have been uploaded from the Silicon Valley internets into the wide world of YouTube!  There was a great list of delegates, vendors and presenters there, so I would definitely recommend you check them out!  There were crazy hard drive watches, fire alarms, and best of all, a ton of great tech being talked about!  IMO the show could have done with just a few more memes though :)  With that said, you can find all there is to know about Storage Field Day 7 over at GestaltIT's landing page!

Rock the vote! Top vBlog voting now open!

It's that special time of year again – a time for the virtualization community to come together and vote for their favorite virtualization blogs.  Yes – the Top vBlog Voting for 2015 is underway over at vSphere-land.com.  As much as this is simply a ranking of blogs, I'm not going to lie – it feels great to be recognized for all the work that I put into this blog, and I appreciate each and every vote and reader that I have here on mwpreston.net.  This will be my fourth year participating in the Top vBlog voting and honestly I'm so humbled by the way things have turned out.  In 2012 I put myself out there in the contest and came in at #125, in 2013 I moved up to a whopping #39, and last year, 2014, I landed in spot #20 (wow!).  Thank you all for the support!

That’s one small step for man, one giant…I have a dream!

I know the subtitle above doesn't make much sense, but I wanted to somehow sneak a picture of Farley into this post, so there's that!  Seriously though, if you are a reader of this blog, or any blog on the vLaunchpad for that matter, be sure to get over to the survey and vote!  Help pay respects and give recognition to the bloggers that spend countless hours trying to bring you relevant and useful information.  Be sure to read this post by Eric Siebert outlining a few tips and things to keep in mind while voting.  This isn't a popularity contest – vote for the blogs you feel are the best – and if you aren't sure, take a look back at some of the content they've produced over the past year.  Eric has links and feeds to over 400 blogs (insane!) on the launchpad if you have a spare 3 or 4 days :)

Speaking of Eric

Don’t forget to give huge thanks and props out to Eric for the time that he spends putting this thing together.  I can’t imagine the amount of work that goes into maintaining something like this.  Honestly I don’t know how he keeps up with it all, the linking, etc.  I have a hard enough time going back through my drafts and creating hyperlinks :)  So props to you Eric and Thank You!  Also, reach out to the wonderful folks at Infinio and thank them for once again sponsoring the Top vBlog Voting!  A lot of what goes on within the community wouldn’t be possible without sponsorships and help from all of the great vendors out there!

You have until March 16

That’s right, this whole thing wraps up on March 16 so make sure you get your choices in before then.  You will find mwpreston dot net front and center on the top of your screen once you start the survey (just in case you are looking for it :)).  Obviously I’d appreciate a vote but be true to yourself, if you don’t think I deserve it, skip me and move on to someone you think does :)

mwpreston dot net vote

I tend to use the Top vBlog voting as a time to reflect back on what I've accomplished over the last year, and 2014 was a super one for me!  I had the chance to attend a couple of new conferences – VeeamON and Virtualization Field Day 4 – both of which I tried my best to cover on this blog.  I've also been doing a lot of writing for searchVMware.techtarget.com, which has been a blast (if you are looking for a best news blog vote, check them out).  No matter where I end up, it's simply an honor to be part of this community and to have made so many new friends from across the world!  So here's to an even better 2015!

Tech Field Day – #VFD4 – VMTurbo: Putting a price on resources!

VMTurbo closed off the first day at VFD4 in Austin, Texas with an overview and deep dive into their flagship product, Operations Manager.  This was one of the presentations I was most looking forward to, as my fellow Toronto VMUG co-leader, fellow Canadian and good friend Eric Wright was involved in it, and for the first time I got to see Eric on the "other side of the table", speaking for a vendor.

 Disclaimer: As a Virtualization Field Day 4 delegate all of my flight, travel, accommodations, eats, and drinks are paid for.  However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors.  This is done at my own discretion.

Demand-Driven Control for the Software-Defined Universe

Eric started off by prompting everyone's thoughts around what exactly Operations Manager is – not by talking about the product or what it can do, but by briefly explaining a motto that VMTurbo has been built around: Demand-Driven Control for the Software-Defined Universe.  I know, it's a long one, but in essence it's something that is lacking within the industry.  The Software-Defined X being introduced into our data centers has brought with it many benefits, perhaps the biggest being control – we can now have software controlling our storage, software controlling our network, and in the case of automation, software controlling our software.  And as Eric pointed out, this control is great, but useless if there is no real consistency or leverage behind whatever is controlling it – in fact, having the demand, having our infrastructure, be the driving factor behind this control is truly the answer.  VMTurbo's Operations Manager is a product that helps us along our path to Demand-Driven Control for the Software-Defined Universe, and it does so in its own unique way…

Desired State – Datacenter Nirvana

Before we get into VMTurbo's unique take on operations management, I first want to talk a little bit about desired state.  Looking after a virtual datacenter, we are always looking to bring our VMs, our applications, our workloads into what we consider a desired state.  This desired state essentially combines both availability and performance, all while maximizing the efficiency of the resources and infrastructure that we have to work with.  We begin to see anomalies and performance issues when we veer away from this desired state, and traditionally we, as administrators, are tasked with bringing our workloads back into it.  VMTurbo states that this is where the problem lies – this human interaction takes time: time for humans to find out about the shift, as well as time for humans to try and put the puzzle back together and get back to the desired state.  VMTurbo essentially takes the human interaction out of this equation – allowing software, in this case Operations Manager, to not only detect the shift from desired state but also, and more importantly, take action towards moving your environment back to it – thus the "control" part of Demand-Driven Control.

And the Demand-Driven part?

This is where the uniqueness of VMTurbo's Operations Manager shines through.  With Operations Manager in control, making the decisions about which VMs should run where, etc., it needs a way to look holistically at your environment.  It does this by taking an economic model and applying it to your infrastructure, essentially turning your datacenter into a supply chain.  Every entity in your environment either supplies or demands resources, and just as in economics, when there are a lot of resources available things are a bit cheaper; as resources go down, things begin to get a lot more expensive.

VMTurbo-Marketplace

So when a VM demands resources, Operations Manager calculates the cost of those resources – again, holistically across your entire environment – to determine just how those resources should be provisioned.  Think of adding more disk to a VM: you need to look at where the disk will come from, how expanding that disk will affect other consumers (VMs) on the same datastore, and how the extra capacity will affect other suppliers such as your storage array, your LUN, etc.  Operations Manager calculates all of this in real time to determine how to best provision that storage capacity to the VM, and takes action if need be to free up resources or create more supply, all while maintaining the desired state of all of your applications.
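To make the market analogy a bit more concrete, here's a toy sketch of the idea – my own illustration, not VMTurbo's actual algorithm: a resource gets steeply more expensive as its utilization approaches capacity, and a workload only "buys" from another supplier when the savings outweigh a transaction cost (a concept I'll touch on again below).

```python
# Toy illustration (my own, not VMTurbo's actual algorithm) of the
# "datacenter as a market" idea: resources get more expensive as utilization
# approaches capacity, and a workload only moves when the savings outweigh a
# transaction cost, which keeps VMs from ping-ponging between hosts.

def price(used: float, capacity: float) -> float:
    """Cost of a resource on a supplier; climbs steeply as it nears full."""
    utilization = used / capacity
    return 1.0 / max(1.0 - utilization, 0.001)

def host_cost(cpu_used, cpu_cap, mem_used, mem_cap) -> float:
    """A host 'sells' CPU and memory; its asking price is the sum of both markets."""
    return price(cpu_used, cpu_cap) + price(mem_used, mem_cap)

def should_move(vm_cpu, vm_mem, current, candidate, transaction_cost=0.5) -> bool:
    """Move the VM only if buying its resources elsewhere is cheaper by more
    than the transaction cost of the move itself."""
    cost_here = host_cost(current["cpu_used"], current["cpu_cap"],
                          current["mem_used"], current["mem_cap"])
    cost_there = host_cost(candidate["cpu_used"] + vm_cpu, candidate["cpu_cap"],
                           candidate["mem_used"] + vm_mem, candidate["mem_cap"])
    return cost_there + transaction_cost < cost_here

# Example: a nearly full host vs. a mostly idle one
busy = {"cpu_used": 18.0, "cpu_cap": 20.0, "mem_used": 110.0, "mem_cap": 128.0}
idle = {"cpu_used": 4.0,  "cpu_cap": 20.0, "mem_used": 32.0,  "mem_cap": 128.0}
print(should_move(vm_cpu=2.0, vm_mem=8.0, current=busy, candidate=idle))  # True
```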

Operations Manager also goes deeper than just the VM when determining who its buyers are.  Through the use of WMI, SNMP, or by simply importing metrics from third-party tools, Operations Manager is able to discover services inside of your operating systems and throw them into the crazy economic market as well.  Think of things like Tomcat servers, Java heaps, SQL Server, etc.  These are processes that may affect the demand for memory, and without insight into them, making a recommendation for more memory on a VM isn't going to help anything.  By taking these granular metrics and statistics from inside your VM's operating system, Operations Manager can give a complete recommendation or take an action that will best suit your application, your VM, and your entire infrastructure.

It still does all the other stuff

Now, VMTurbo's supply chain model definitely sets it apart from other monitoring tools, and the fact that Operations Manager can take action automatically is also a big plus when comparing the product to others – but you may be asking yourself, what about all of the other stuff that most monitoring tools do today?  Well, Operations Manager does that as well.  Items such as right-sizing a VM, taking away or granting CPU to a VM, placement, capacity planning, etc. – Operations Manager does all of this, and in fact it also applies these actions to its supply chain model, allowing the software to see just how granting another 2 vCPUs to a VM will "disrupt" the market and decide whether or not that change is "worth it".  Operations Manager also has some decent networking functionality built in.  By figuring out which VMs are "chatty", or communicating with each other often, Operations Manager can make the recommendation to move these VMs onto the same host, eliminating any performance degradation or latency that could occur by having the communication move out across your network.

When VMTurbo responds it does so in the form of either a recommendation or an action – meaning we can have the software recommend the changes to the user, or we can have the software go ahead and take care of the issues itself.  Honestly this is a personal preference and I can see customers probably using a mix of both.  When calculating these recommendations and actions, Operations Manager also places a transaction cost on any move it makes.  What this does is prevent VMs from essentially bouncing back and forth between hosts trying to achieve their desired state.

Operations Manager really looks like a slick product which takes a different stance on monitoring and healing your infrastructure.  Having the application that is doing the watching also do the actual doing makes sense to me – it eliminates the need for human interaction, which in turn eliminates risk and certainly decreases the time it takes to get back to desired state.  And I know I've specifically geared this post towards vSphere, but honestly VMTurbo supports just about everything – think OpenStack, Azure, Hyper-V, AWS, vCloud – it's got them all covered.  If your interest has at all been piqued I encourage you to watch all of the VMTurbo #VFD4 videos here – or better yet, get yourself a trial version and try it out yourself.  Oh, and this just in – get your name in on a home lab giveaway they are having in celebration of their newest launch.

Tech Field Day – #VFD4 – StorMagic: A VSA living on the edge

Before we get too far into this post let's first get some terminology straight.  StorMagic refers to your remote or branch offices as the "edge" – this might help when reading through a lot of their marketing material, as I sometimes tend to relate "edge" to networking, more specifically entry/exit points.

 Disclaimer: As a Virtualization Field Day 4 delegate all of my flight, travel, accommodations, eats, and drinks are paid for.  However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors.  This is done at my own discretion.

StorMagic, a UK-based company founded in 2006, set forth to develop a software-based storage appliance that enterprises can use to solve one big issue – shared storage at the edge.  StorMagic is another one of those companies that presented at VFD4 that really had a strong sense of who their target market is – they aren't looking to go into the data center (although there is no reason they can't), and they aren't looking to become the end-all be-all of enterprise storage (although I'm sure they would love that) – they simply provide a shared, highly available storage solution for those companies that tend to have many remote branch offices with a couple of ESXi (or Hyper-V) hosts.  On the second day of VFD4, in a conference room at SolarWinds, StorMagic stood up and explained how their product, SvSAN, can solve these issues.

Another VSA but not just another VSA

Choosing between deploying a traditional SAN vs. a VSA is a pretty easy thing to do – most of the time it comes down to the sheer fact that you simply don't have enough room at your remote site to deploy a complete rack of infrastructure, nor do you have the resources on site to manage the complexity of a SAN – so a VSA presents itself as a perfect fit.  With that said, there are a ton of VSAs on the market today, so what sets StorMagic apart from all the other players in the space?  Why would I choose SvSAN over any other solution?  To answer these questions let's put ourselves in the shoes of a customer in StorMagic's target market – a distributed enterprise with anywhere from 10 to 10,000 remote "edge" offices.

One of the driving forces behind SvSAN's marketing material is the fact that you can set up your active/active shared storage solution with as few as two nodes.  Two.  Most VSA vendors require at least a three-node deployment, and justifiably so – they do this to prevent a scenario called split-brain.  Split-brain is a scenario where nodes within a clustered environment become partitioned, with each surviving node thinking it's the active one, which results in a not-so-appealing situation.  So how does StorMagic prevent split-brain scenarios with only two nodes?  The answer lies in a heartbeat mechanism called the Neutral Storage Host (NSH).  The NSH is recommended and designed to run centrally, with one NSH supporting multiple SvSAN clusters – think one NSH supporting 100 remote SvSAN sites.  The NSH communicates back and forth with the SvSAN nodes in order to determine who is up and who is down, thus being the "tie-breaker", if you will, in the event the nodes become partitioned.  That said, while the NSH is an important piece of the SvSAN puzzle, it doesn't necessarily need to run centrally.  For those sites that don't have good (or any) bandwidth, the NSH can be run on any Windows, Linux, or Raspberry Pi device you want, locally at the site.  Beyond the heartbeat mechanism of the NSH, SvSAN also does a multitude of things locally between the two nodes to prevent split-brain – it can utilize any one of its networks, be it the management, iSCSI, or mirroring network, to detect and prevent nodes from becoming partitioned.  So with all this, what advantages come from not requiring that third node of compute within the cluster?  Well, one less VMware license, one less piece of hardware you have to buy, and one less piece of infrastructure you need to monitor, troubleshoot and back up – which can add up to a pretty hefty weight in loonies if you have 10,000 remote sites.
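To picture the role that tie-breaker plays, here's a toy sketch of a generic two-node-plus-witness quorum decision – my own illustration of the split-brain-avoidance concept, not StorMagic's actual NSH protocol.

```python
# Toy illustration of a two-node + witness quorum decision (a sketch of the
# general split-brain-avoidance idea, not StorMagic's actual NSH protocol).

def can_stay_active(sees_partner: bool, wins_witness_vote: bool) -> bool:
    """A node keeps serving the mirrored datastore only if it can still see its
    partner, or, when the partner is unreachable, the neutral witness grants it
    the tie-breaking vote. Otherwise it stands down to avoid split-brain."""
    if sees_partner:
        return True           # normal operation, both mirrors in sync
    return wins_witness_vote  # partitioned: only one node can win the witness

# Example: the inter-node link dies but both nodes can still reach the witness;
# the witness grants its vote to exactly one of them.
node_a = can_stay_active(sees_partner=False, wins_witness_vote=True)
node_b = can_stay_active(sees_partner=False, wins_witness_vote=False)
print(node_a, node_b)  # True False – one node stays active, no split-brain
```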

stormagic-sync

Aside from lowering our infrastructure requirements, SvSAN brings a lot of enterprise functionality to your remote sites.  It acts in an active/active fashion, synchronously replicating writes between each node.  When a second SvSAN node is introduced, a second path to the VMs' storage is presented to our hosts.  If at any time one host fails, the other host containing the mirrored data can pick up where it left off, which essentially allows VMware HA to take the VMs that were running on local storage on the failed host and restart them on the surviving host using local storage.  While the failed node is gone, the surviving SvSAN journals and writes metadata about the changes that occur in the environment, minimizing the time it will take to re-synchronize when the original node returns.  That said, the original node isn't required for re-synchronization – the benefits of the SvSAN architecture allow for the second node to come up on different hardware or even different storage.  This newly added node will be automatically configured, set up and re-synchronized into the cluster – same goes for the third, the fourth, the fifth node and so on, with just a few clicks.

As far as storage goes, SvSAN can take whatever local or network storage you have presented to the host and use it as its datastore.  The appliance itself sits on a datastore local to the host, somewhere in the neighborhood of 100GB – from there, the remaining storage can be passed straight up to SvSAN in a JBOD, RDM, or "VMDK on a datastore" fashion.  SvSAN also gives us the ability to create different storage tiers, presenting different datastores to your hosts depending on the type of disk presented, be it SATA, SAS, etc.  In terms of SSD, SvSAN supports either running your VMs directly on solid-state datastores, or carving up an SSD tier to be used as a write-back cache to help accelerate some of those slower tiers of storage.

plugin-front-page_462x306

In terms of management, StorMagic is fully integrated into the vSphere Web Client via a plug-in.  From what I've seen, all of the tasks and configuration that you need to perform are done through very slick, wizard-driven menus within the plug-in, and for the most part StorMagic has automated a lot of the configuration for you.  When adding new nodes into the VSA cluster, the vSwitches, network configurations and iSCSI multipathing are all set up and applied for you; when recovering existing nodes, surviving VSAs can push configuration and IQN identifiers down to the new nodes, making the process of coming out of a degraded state that much faster.

Wait, speaking of VMware

Worst transition ever, but hey, who better to validate your solution than the vendor of one of the hypervisors you run on?  As of Feb 4th, VMware and StorMagic have announced a partnership which basically allows customers to couple the new vSphere ROBO licensing with a license for SvSAN as well.  Having VMware, who took a shot at their own VSA in the past (ugh, remember that!), choose your product as one they bundle their ROBO solutions with has to be a big boost of confidence for both StorMagic and their potential customers.  You can read more about the partnership and offering here – having both products bundled together is a great move on StorMagic's part IMO, as it can really help push both adoption and recognition within the VSA market.

Should I spend my loonies on this?

IMO StorMagic has a great product in SvSAN.  They have done a great job of stating who their target market is and who they sell to – and of fielding question after question with that market in mind.  HA and continuous uptime are very important to those enterprises that have a distributed architecture.  They've placed these workloads at the "edge" of their business for a reason – they need the low latency – and honestly, the "edge" is where a company makes its money, so why not protect it?  With that said, I see no reason why an SMB or mid-market business wouldn't use this within their primary data center and/or broom closet, and I feel StorMagic could really benefit by focusing some of their efforts in that space – but that's just my take, and the newly coupled VMware partnership, combining SvSAN with the ROBO licenses, kind of de-validates my thinking and validates that of StorMagic – so what do I know :)  Either way, I highly recommend checking out StorMagic and SvSAN for yourself – you can get a 60-day trial on their site, and you can find the full library of their VFD4 videos here.