
#VFD5 Preview – OneCloud

Am I looking forward to the presentation at Virtualization Field Day 5 from OneCloud?  I have no idea!  Why?  Well, here is a company that I know absolutely nothing about!  I can’t remember ever coming across OneCloud in any of my journeys or conferences!  Honestly, I think this is the only company presenting at VFD5 that I have absolutely no clue about…

Disclaimer: As a Virtualization Field Day 5 delegate, all of my flights, travel, accommodations, eats, and drinks are paid for.  However, I do not receive any compensation, nor am I required to write anything regarding the event or the sponsors.  This is done at my own discretion.

That will certainly change fast

OneCloud will present at VFD5 on June 24th at 1:00 PM, where I’m sure we will all be enlightened a little more on the solutions they provide.  That said, I don’t like going in cold, knowing nothing about someone – thus, this preview blog post will at least help me understand a little bit about what OneCloud has to offer…

So let’s start from the ground up.  OneCloud is essentially a management platform for a hybrid cloud play.  Their core technology, the Automated Cloud Engine (ACE), is the foundation on which they build their other services.  From what I can tell, ACE facilitates the discovery of your on-premises data center, taking into account all of your VMs, physical storage, and networking information.  From there, ACE can take different business objectives and transform them into API calls in order to essentially replicate all of your infrastructure into the public cloud – for now, Amazon’s AWS appears to be the only cloud supported.

The service running on top of ACE is OneCloud Recovery.  OneCloud Recovery allows organizations to build a disaster recovery or business continuity solution with the public cloud as the primary target – skipping the costs and complexity of implementing a second or third site on premises.


So here is how it all happens from start to finish.  OneCloud is deployed into your environment as a virtual appliance, and another instance is deployed into Amazon.  From there it auto-discovers your environment: your networking setup, storage configurations, data, and applications are all tied together, and somewhat of a blueprint of your environment is created.  You then use their policy engine to apply RTO and RPO objectives to your applications.  OneCloud will then provision a fully functioning virtual data center in Amazon – one that mirrors your environment in terms of networking and configuration.  OneCloud not only duplicates your environment into Amazon, but it will also optimize both your compute and storage in order to minimize costs, meaning it will scale down on CPU where it believes it can and place your data onto the most cost-effective storage.  Once your data is there, OneCloud performs ongoing replication in order to meet the RPO you have selected.  From there it’s just a matter of performing your normal DR tests and engaging in any failover (and failback) operations.

OneCloud seems to have some interesting technology and I’m looking forward to learning more at VFD5.  Some questions for OneCloud come to mind: How do they compare to VMware’s vCloud Air DR services?  Do they plan on expanding out to other public clouds such as Google, Azure, or vCloud Air?  With a strong software base in ACE, do they plan on moving outside just the DR/BC realm – things such as DevOps and public cloud labs come to mind?  I really like how they are abstracting away what can be some very complicated API calls to Amazon – any time a company provides a solution built on simplicity it’s a good thing, but especially so when dealing with the complex networking and configuration of public cloud and disaster recovery.  If you would like to learn more about OneCloud with me, you can do so by watching the live stream on the VFD5 event page.  That stream, along with any other content I create, will be posted on my VFD5 event page as well.

Backing up your vCenter DB – all three of ’em

Wait!  What?  Three?!?  Yes, you read correctly!  While in the days of vCenter 5.0 and below we only had to worry about one database, the release of 5.1 has tripled that!  This is something I hadn’t even thought about until recently attending a vBrownBag session put on by Justin King (@vcenterGuy).

So there’s the main vCenter SQL database, right?  OK, no biggie – I know about that one and am already backing it up; there’s a KB article on how to do that.

Then there is the SSO database – no problem, I knew about that one as well since I had to create it when I first upgraded to 5.1.  Again, it’s an MS SQL database that doesn’t change all that much, and it’s easy enough to back up…

But then Justin started talking about the Inventory Service – remember, that’s the third component you had to install when upgrading.  Well, guess what?  It has a database too!  It’s not SQL at all – it sits on your vCenter Server in xDB format.  My first thought was, what is even in this database?  I can’t browse it the way I can the SQL databases (or I just don’t know how to).  From what I can gather from the What’s New docs and VMworld presentations, the Inventory Service database holds things such as a read cache of all the objects accessed through the vSphere Web Client, along with all of the tags and categories you have set up in vCenter.  I’m sure there’s more, but this is all I can find.

However, back to my main objective: how do I back this thing up?  A little digging around and I found this KB article on how to back up and restore your vCenter Inventory Service database.  Basically, it goes as follows.

Backing up the inventory database (WINDOWS)

  1. Navigate to the Inventory scripts folder (c:\Program Files\VMware\Infrastructure\Inventory Service\scripts)
  2. Run the following
    • backup.bat -file backup_filename

Restoring the inventory database (WINDOWS)

  1. Navigate to the Inventory scripts folder (c:\Program Files\VMware\Infrastructure\Inventory Service\scripts)
  2. Run the following
    • restore.bat -backup backup_filename

So utterly simple yet so not talked about 🙂  Wait – but what if I’m using the vCSA?  Am I out of luck?  Absolutely not!  Use the following…

Backing up the inventory database (Linux)

  1. Navigate to the Inventory scripts folder (/usr/lib/vmware-vpx/inventoryservice/scripts/)
  2. Run the following
    • ./backup.sh -file backup_filename

Restoring the inventory database (Linux)

  1. Navigate to the Inventory scripts folder (/usr/lib/vmware-vpx/inventoryservice/scripts/)
  2. Run the following
    • ./restore.sh -backup backup_filename
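
If you’d rather not remember to run that by hand, here’s a minimal sketch of a wrapper you could schedule on the vCSA.  It simply calls the backup.sh script from the steps above with a timestamped filename – the backup directory, file naming, and retention window are my own assumptions, not anything VMware prescribes:

  #!/bin/bash
  # Minimal sketch: timestamped Inventory Service backups on the vCSA.
  # The script path comes from the steps above; the backup directory,
  # file naming, and retention window are my own picks.

  SCRIPT_DIR="/usr/lib/vmware-vpx/inventoryservice/scripts"
  BACKUP_DIR="/storage/invsvc-backups"   # hypothetical location - pick your own
  KEEP_DAYS=14                           # how long to keep old backups

  mkdir -p "${BACKUP_DIR}"

  STAMP=$(date +%Y%m%d-%H%M%S)
  BACKUP_FILE="${BACKUP_DIR}/invsvc-${STAMP}.bak"

  cd "${SCRIPT_DIR}" || exit 1
  if ./backup.sh -file "${BACKUP_FILE}"; then
      echo "Inventory Service backup written to ${BACKUP_FILE}"
  else
      echo "Inventory Service backup failed" >&2
      exit 1
  fi

  # Prune anything older than the retention window
  find "${BACKUP_DIR}" -name 'invsvc-*.bak' -mtime +${KEEP_DAYS} -delete

Schedule it via cron on the appliance, or just run it by hand before any upgrade work.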

So there you go!  You can now sleep at night knowing you aren’t going to lose all of your hard work setting up those tags!  Moral of the story – Pay attention and participate in the vBrownBags – there is always some great information and learning to be had.

New Orleans or Madrid? Veeam says take your pick…

Feel like a trip to New Orleans or Madrid?  Fancy yourself some TechEd craziness?  How about saving yourself a little bit of money and having Veeam purchase your ticket?  The guys in green are at it again with their monthly drawings and have placed a full conference pass to TechEd 2013 into the March slot.  So no excuses – get on over and enter the drawing now!  And the best part is, they don’t really slam you with email after the fact either, so there’s no reason not to!  The winner will be announced March 18, so time is running out!

Veeam Backup and Replication Cloud Edition!!!

Today Veeam announced the release of a new addition to their flagship Veeam Backup and Replication software.  This edition, labelled Veeam Backup Cloud Edition, adds the capability to automatically sync or copy your VMware or Hyper-V backups to the cloud.  And when they say cloud, they really mean it – this bad boy comes with support for 15 of the most popular cloud storage providers (Amazon, Glacier, Rackspace, Azure), making it very cost effective for customers to get an offsite disaster recovery solution in place without the need for a second site or even more infrastructure.

All or selected backups (and even other files you select) are compressed and deduplicated yet again, then sent to the cloud provider in a secure, encrypted fashion (up to 256-bit AES encryption).  Again, the support for the most popular cloud providers eliminates the need for you to learn complicated cloud APIs, and just like every other Veeam product, this seems to be fairly easy and simple to set up, configure, and use.  It provides the ability to schedule limits on the bandwidth used and has some nifty reporting features as well to inform you that your backups have been successfully copied offsite.  Also included is a cost estimator to help you gauge just how much those monthly storage bills are going to be.  Have a look at the following video to see all of the new features in action…

And for those who can’t take the time for the video, a few screenshots below 🙂

[Screenshots: cloud storage options, cloud provider selection, file selection, main screen, file purge options, encryption options, and retention settings]

As for licensing, current customers are able to purchase this as an additional product, on a per-socket basis, just as they have purchased Backup and Replication.  New customers have the ability to bundle this with their purchase of Veeam Backup and Replication on a yearly subscription basis.  Buuuut, I’m no licensing expert and do not work for Veeam, so you should probably contact them for the details…

Nakivo the latest to offer NFR licenses to VMware professionals for FREE!!!

If you are a VCP, vExpert, VCI, or even a VMUG member, you have no excuse not to try out Nakivo.  In a press release yesterday, Nakivo announced that they are now handing out free 2-socket NFR licenses of the latest version (v2) of their flagship product, Backup and Replication.

In addition to their day-to-day duties, VMware professionals run their home and work labs to try and learn new software, verify new concepts and ideas, and run pre-production tests. “We are pleased to provide VMware professionals with a fully-functional, yet a free data protection solution for their home and work labs allowing them to drive innovations in business and technology, enhance their IT infrastructure, and improve professional skills,” said Bruce Talley, CEO at NAKIVO.

IMO this is a good move for any company.  vExperts, VCPs, and VCIs are on the leading edge of the latest technologies and often throw these products into their home labs to evaluate and try out, which normally results in some free marketing and exposure for the companies as blog posts and tweets go flying out about the products.

So, if you qualify and are looking for your licenses, head on over here and sign up to grab yourself a couple of sockets’ worth.  I know I will!!!

Veeam Job taking a crazy long time on Processing Configuration…Check your destination!

As of late I’ve found myself engulfed in configuring and tuning some Veeam replication jobs.  As I’ve been monitoring these, I’ve noticed that a few VMs seemed to be taking what I considered too long to replicate.  After a little bit of investigation, I realized that the majority of the job time was spent on the Processing Configuration section of the Veeam job.

Now, after a look into the logs, it appeared that the VM replica in question was actually living on a different datastore than the one set up as the destination for the job.  Also, when I went to browse that destination datastore, there were a ton of folders labelled VMName_Replica_1, Replica_2, Replica_3, and so on and so forth – around 50 of them :).  So, the solution is either to change your job destination to the proper datastore (where the replica actually is) or to Storage vMotion your replica to the original job destination.  I chose the latter, as I had many other VMs in the job that already resided on the correct datastore.  Anyways, after that the next round of replication sat on Processing Configuration for 30 seconds instead of 40 minutes.

Moral of the story: if you are sitting on Processing Configuration for a long time, have a look at your job definitions – more particularly, your destination datastore and where your actual replicas are physically located.  You probably have a mismatch there.  No Storage vMotion?  No problem – check out Veeam’s Quick Migration; it’ll do the same thing for you.
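
If you want a quick way to eyeball that mismatch without clicking through the datastore browser, here’s a minimal sketch using VMware’s govc CLI.  The connection details, datastore name, and the "_replica" naming suffix below are all assumptions you’d swap for your own environment:

  #!/bin/bash
  # Hypothetical sketch: list what actually lives on the datastore a Veeam
  # replication job targets, and which replica VMs are registered in vCenter.
  # Requires VMware's govc CLI (github.com/vmware/govmomi); the connection
  # details, datastore name, and "_replica" suffix are assumptions.

  export GOVC_URL="https://vcenter.lab.local/sdk"
  export GOVC_USERNAME="administrator@vsphere.local"
  export GOVC_PASSWORD="VMware1!"
  export GOVC_INSECURE=1          # lab only - skips certificate checks

  JOB_DESTINATION_DS="ReplicaDatastore01"   # the datastore the Veeam job points at

  echo "--- Folders on ${JOB_DESTINATION_DS} ---"
  govc datastore.ls -ds "${JOB_DESTINATION_DS}"

  echo "--- Replica VMs registered in the inventory ---"
  govc find / -type m -name "*_replica*"

If the replica folders aren’t sitting where the job thinks they should be, that’s your 40 minutes of Processing Configuration right there.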