Tag Archives: Backup

Setting yourself up for success with Veeam Pre-Job Scripts

For a while Veeam has been able to execute scripts post-job, or after the job completes – but it wasn’t until version 8 of their flagship Backup and Replication product that they added the ability to run a pre-job script, or a script that executes before the job starts. When v8 first came out with this ability I strove to figure out what in the world I would need a pre-job script for – and for the longest time I never used one in any of my environments. If a job failed I would execute post-job scripts to run and hopefully correct the reason for the failure – but a while back it kind of dawned on me, and with a bit of a change in mindset I realized something – why fail first?


Why fail when success is possible?

As I mentioned above I’d grown accustomed to using post-job scripts to correct failing jobs. For instance, there were times when, for whatever reason, a proxy would hold on to a disk belonging to one of my replicas – subsequently, the next run of the job would fail trying to access that disk – and, even more importantly, consolidation of any VMs requiring it would fail as well, since the original replica couldn’t access the disk mounted to the proxy. What did I do to fix this? Well, I added a script that executed post-job, looking to simply unmount any disks from my Veeam proxies that shouldn’t be mounted.

Another scenario – I had some issues a while back with some NFS datastores simply becoming inaccessible. The fix – simply remove and re-add them to the ESXi host. The solution at the time was to run a post-job script in Veeam: if the job failed with an error about not being able to find the datastore, I ran a script that would automatically remove and re-add the datastore for me – and on the next job run everything would be great!
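For reference, that remove/re-add fix is only a few lines of PowerCLI. This is a hedged sketch of the idea rather than my exact script – the host name, datastore name, NFS server, and export path below are placeholders you would swap for your own:

```powershell
# Sketch only: remove and re-add a stale NFS datastore so the next job run succeeds.
# 'esxi01.lab.local', 'NFSDS01', 'nfs.lab.local' and '/export/nfsds01' are placeholders.
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer vcenter.mwpreston.local -u username -pass password

$vmhost = Get-VMHost -Name "esxi01.lab.local"

# unmount the inaccessible datastore from the host
Get-Datastore -Name "NFSDS01" | Remove-Datastore -VMHost $vmhost -Confirm:$false

# re-add it with the same name and NFS path so jobs can find it again
New-Datastore -Nfs -VMHost $vmhost -Name "NFSDS01" -NfsHost "nfs.lab.local" -Path "/export/nfsds01"
```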

“Fail and Fix” or “Fix and Pass”

So, the two solutions above, while they do fix the issues, do so after the fact – after we have already failed. Even though they fixed everything up for the next run of the job, I’d still lose that one restore point – and sure enough, the time WILL come when it’s that exact point in time you need to recover from! The answer to all this is pretty simple – migrate your post-job scripts to pre-job scripts. Let’s set ourselves up for success before we even start our job! Although this may seem like common sense, for whatever reason it took a while before I saw it that way.

So with all that – hey, let’s add some code to this post. Below you will find one of my scripts that runs before each Veeam job – my proactive approach to removing foreign disks from my Veeam proxies!

Add-PSSnapin VeeamPSSnapin
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer vcenter.mwpreston.local -u username -pass password
# get the job id out of the parent process's command line
$parentpid = (Get-WmiObject Win32_Process -Filter "processid='$pid'").ParentProcessId.ToString()
$parentcmd = (Get-WmiObject Win32_Process -Filter "processid='$parentpid'").CommandLine
$jobid = $parentcmd.split('" "')[16]
$vbrjob = Get-VBRJob | Where-Object { $_.Id -eq "$jobid" }
# get some info to build replica VM names
$suffix = $vbrjob.Options.ViReplicaTargetOptions.ReplicaNameSuffix
$vms = $vbrjob.GetObjectsInJob()
# create array of replica names
$replicasinjob = @()
foreach ($vm in $vms) {
    $replica = $vm.Name + $suffix
    $replicasinjob += $replica
}
# loop through each replica and check the Veeam proxies (named VBR*) for foreign disks
foreach ($replicaitem in $replicasinjob) {
    $replica = $replicaitem.ToString()
    Get-VM -Location ESXCluster -Name VBR* | Get-HardDisk | Where-Object { $_.Filename -like "*$replica*" } | Remove-HardDisk -Confirm:$false
}

So as you can see this is a simple script that basically retrieves the ID of the job it was called from by inspecting the command line of its parent process – by doing it this way we can reuse this block of code in any of our jobs. It then searches through all of the disks attached to our Veeam proxies – if it finds one that belongs to one of the replicas we are about to process, it removes it. Simple as that! Now, rather than failing our job because a certain file has been locked, we have set ourselves up for a successful job run – without having to do a thing! Which is the way I normally like it 🙂 Thanks for reading!

Vembu BDR in the lab – Part 2 – Recovery!

In part 1 of this review we backed up our data and ensured that it is restorable. Now let’s continue our review of the Vembu BDR suite by having some fun and actually restoring some data! That said, before we jump right into performing a restore I want to talk a little bit about a nifty technology that BDR uses for most of its restores – the Vembu Virtual Drive.

What’s the Vembu Virtual Drive?

The Virtual Drive is a technology installed along with the BDR suite and is essentially a second hard drive on our BDR server. This drive is used by Vembu to hold metadata – pointers, if you will – to mounted backups on our BDR server, and therefore requires very little extra space, only 512MB to be exact, as it just points to already-created backup files. As we can see below, when a backup is mounted Vembu actually makes it available in many different formats. For instance, the backup shown below is of a physical Windows server, yet Vembu presents us with this data in both VMware and Hyper-V formats (vmdk, vhd) as well as an IMG format (ISO). Although displaying the backups in these various formats is in itself not magic, the fact that BDR does this within seconds is truly incredible – and not just for full backups, either. Incremental backups can be displayed and restored to the various formats in seconds as well! The secret sauce to all of this lies in the Vembu Hive file system, which provides the means for BDR to efficiently expose its incremental backups as virtual full backups without having to migrate or merge any data at all – meaning you can get at your data almost immediately and restore to almost any platform with ease!


Vembu uses these various formats presented by the Virtual Drive technology to power most of its restores, and this is how BDR can take a physical server and convert it to a virtual machine – or a VMware VM and convert it to a Hyper-V VM. This is functionality I’ve not seen in many other backup products and it’s very useful in my opinion! Thankfully, this functionality is exposed within the UI as well, so if we wanted to manually mount a backup without doing a restore we could do so by simply clicking the ‘folder’ icon within the mount section of the Recovery screen (as shown below).


Although we could use this manual mount to restore pretty much any source to any target platform, it would certainly take some manual configuration and probably a little time on our end to do so. For this reason, Vembu has some built-in restore workflows that we can use by clicking the ‘Restore’ icon shown above. Depending on the source of our production data – whether it was Hyper-V, VMware, or a physical server – we will get some different options as it pertains to restoring our data.


As we can see, Instant VM Recovery, File-Level Recovery, and Download are available no matter what the source platform was – with live recovery options added to our VM-based backups, and a disk/partition-level recovery included with our physical machine backups. To go through all the recovery options within all of the supported source types would be very time consuming and, quite honestly, a duplication of work, as the wizards and steps used to perform each type of recovery are very similar – which is a good thing! Instead, let’s take a look at some of the most popular recovery options and go through the motions.

Instant VM Recovery

Let’s first take a look at Instant VM Recovery within Vembu BDR, and for this we will do so inside of our Hyper-V environment. Instant VM Recovery really lives up to its name as it provides us with a ready-state VM, a duplicate of our production VM, in an almost instant fashion. To do this Vembu uses a couple of different technologies depending on the platform the BDR server is running on: for Windows-based installs, Hyper-V is leveraged – for Linux, KVM is the technology of choice. Basically Instant VM Recovery takes our deduplicated and compressed backup files and directly boots them on the selected hypervisor, essentially transforming our backup files into a VM. Certainly performance of our VM will be impacted when running from these backup files, but this is more than made up for by the fact that we can instantly provide access to production resources, while we leverage something like Live Migration or vMotion to move our newly recovered VM back to production storage!

To get started with this type of recovery simply select ‘Instant VM Recovery’ from within the ‘Recovery’ section of Vembu and we are brought into the ‘Restore Version’ settings. Here, we simply select the restore point, or restore version, which we would like to instantly recover.


From the ‘Restore Data’ section, we are prompted to select the data (or in this case, the VM) that we wish to restore.


The ‘Restore Options’ step gives us a little flexibility on how we would like to recover our VM, and the options presented greatly depend on the ‘Instant VM Recovery’ mode we select and the platform our BDR server is installed on. For instance, on a Windows-based BDR deployment, Hyper-V VMs can be instantly recovered in Hyper-V mode, whereas VMware VMs can be instantly recovered in either Hyper-V or VMware mode. Depending on which Instant Recovery mode we select, different options will present themselves.

  • Hyper-V – Instantly recovers VM on the Vembu appliance configured with the Hyper-V role
    • Startup RAM – the amount of memory granted to start the newly created virtual machine
    • Configure Network Details – Allows us to specify an IP Address and Subnet mask for our new VM
  • VMware – Instantly recovers VM on a specified VMware host.
    • Target VMware Server/Datastore – specifies where we want to run our new virtual machine
    • VM Name – name of our new VM.

Since we are using our Hyper-V environment, I’ve selected Hyper-V here and provided the necessary information…


After clicking through the review and progress screens our VM will actually be created within the Hyper-V environment directly on our Vembu appliance, utilizing no extra space as the data is coming from the already-existing backup files. As you can see below, I have a couple of Instant Recoveries which I’ve initiated. What we are left with is a VM booted completely from our backup files, instantly recovered. In turn, we could instantly provide our end users access to the VM, and take our time migrating it back to a production environment.


File-Level Restore

The file-level restore wizard (Recovery->Restore->File-Level Recovery) within Vembu works in a similar way to that of Instant VM Recovery, where we select the VM to recover and its associated restore point, and finalize the wizard. However, unlike Instant VM Recovery, file-level restore doesn’t create a new VM within our environment – instead it takes the contents of the virtual or physical machine backup and mounts them to a local drive within our Vembu BDR server. Shown below is the file system on my Vembu BDR server – as you can see, with a few extra drives: G: being the drive belonging to our physical server backup, and Y: being a drive belonging to our backed-up virtual machine, as I’ve initiated file-level recovery for both of those servers.


Once these drives have been mounted we can simply access the backed-up data and do as we please with it, whether that be copying it back to our production workloads or to another spot of our choosing. When we are done, we simply go back into the ‘Recovery’ section of the BDR UI and unmount our backups using the folder icon as shown below…


Once we have unmounted our backups we can see that the folder icon flips back to being closed.

Disk Level Recovery

Disk-level recovery within Vembu BDR does just that – recovers individual disks (or partitions, in the case of a physical server) back to their original locations. That means individual VMDKs can be restored to a VM, or individual partitions back to a physical server or workstation. Again, the wizard to perform a disk-level recovery is much the same as all of the other recovery types – selecting a restore point and source – differing only when selecting a target. As shown below we can see the disk-level recovery options for restoring a single VMware disk back to the original VM by specifying the target ESXi/vCenter, along with an associated VM and datastore.


After completing a Disk Level Restore a new VMware VMDK disk is created and attached to the VM you have chosen.


Download

Finally we make our way to the last recovery option within Vembu BDR – the download option. Download uses the Vembu Virtual Drive technology explained earlier to mount the backed-up VM and allow us to pull down various images of the backup, including VMDK, VHDX, and raw ISO. That said, manually mounting our backup always selects the latest restore point, whereas the download option allows us to specify any restore point on disk. Again, this is a really cool feature that I don’t see in a lot of backup software – being able to essentially convert your VM from platform to platform by means of the backup is a neat idea. Although there would certainly be some work to get everything running, it’s definitely a great starting point.


Failing over a replica

Now that we have gone through all of our options as it pertains to restoring backups it’s time to look at one last restore technique that BDR provides – replication failover. If you remember back to part 1 of this review, we went through the process of setting up a replication job within our vSphere environment and replicated one of our VMs (MSDC) – now we will take a look at the process of failing over to that replica. Replica failover within BDR is different from restoring a backup as we do not need to transfer any files or data – our backup is essentially a replica of the VM, sitting on a host in the native VMware format. We could, if we wished, simply power on this VM from within our vCenter environment, but by initiating the process through Vembu we gain a bit more in terms of functionality and options.

To begin failing over a replica we need to navigate to VM Replication->Manage Replicas. The process can be kicked off by clicking on the ‘Restore’ button as shown below.


As shown below we have a few options as it pertains to managing our replicas: Failover, Finalize Failover, and Finalize Failback. Let’s first take a look at the Failover option.


The next few options in the wizard after selecting Failover are similar to what we have seen in almost all of the BDR wizards – selecting a restore point and selecting our restore source. After we have done that and clicked the ‘Failover Now’ button on the review section, our failover process will initiate…


As we can see above our failover request has completed – which means we have successfully failed over to our replicated VM. This means operations are essentially switched from our production VM to our replicated VM, with any network mapping or re-IP rules applied to it. This failover state, however, is not permanent and needs to be finalized in some way. By default, BDR performs a snapshot on the replicated VM before it becomes active in order to allow us to revert back to various pre-failover states – which means simply failing over is a temporary step that needs to be further finalized. By clicking the same Restore button on the Manage Replicas screen, and this time selecting the ‘Finalize Failover’ option, we are presented with the following options on the Finalize Type section of the wizard: Undo Failover, Permanent Failover, and Failback.


The ‘Undo Failover’ option will essentially undo any changes that we have made to the environment – meaning the production VM will once again become the active VM, and the replicated VM discards any changes that were made to it while it was in the temporary failover stage and reverts back to its original state.  This option is normally used if the source VM gets restored or becomes successfully active again.

The Permanent Failover option is basically the opposite of the Undo Failover – it commits our changes to the replicated VM and in essence makes the replicated VM the new source VM, permanently.  This option would be used if we are absolutely sure that our source VM is no longer recoverable and want to permanently run from our replicated VM.

Finally the Failback option gives us the ability to failback to our production site.  This process will recover our replicated VM, along with any changed data while it was in a failover state, back to our production site, either on the original host or another host we prefer.  Again, just selecting Failback doesn’t commit the failback, it leaves us in another temporary state that needs to be finalized.  The options for finalizing a failback are…

  • Permanent Failback – If we have failed back to production and we are happy with how things are working, this option commits our actions – the newly failed-back VM becomes our production VM and is automatically excluded from any current replication jobs
  • Undo Failback – just as with failover we have the option to undo our failback – if something happens during the failback, or we find that the failed-back VM is not performing as expected, this option will revert our production workloads back to the replicated VM.

vSphere replication within BDR has many options available and I like how each step can essentially be reverted.  If you have engaged in failing over production VMs then you are most likely already in a disaster state – having the ability to undo both your failover and failback processes is certainly a nice thing to have during these times if something goes awry.


Wow – I think this is probably one of the longest reviews I’ve ever tackled, but the fact of the matter is I’ve only covered the core technologies which are included in the Vembu BDR Suite.  In addition to everything we have covered here Vembu provides backup and recovery for individual application objects such as SQL databases or Exchange emails, along with SaaS based protection for services like Office365 and Google Apps.  Vembu also provides integration into their cloud services, or even Amazon if you so please.  The point is, the Vembu BDR Suite is an all-encompassing product that can provide complete protection across your environment, be it on premises or off premises!

There is lots to like about Vembu BDR – first off, I think of commonality. Although BDR requires multiple different applications to make up the suite, all of those applications provide a similar UI – same colors, same look and feel, and same types of functionality. Even within single applications I see commonality – with the exception of maybe one step, the screens you go through in order to complete a restore are pretty much the same, whether it be file-level, VM-level, disk-level, or even failing over a replica – all the processes have a similar feel to them.

Also, the BDR console, or main UI, is nice in the fact that it pulls in information from all of the other applications – I can see the status of the backups and restores of my physical machines inside the BDR interface, even though they were essentially set up from the ImageBackup application. In all honesty I would love to see Vembu somehow port the process of deploying the ImageBackup application and its associated backup configuration directly into the BDR server – then you would get a true one-stop shop for managing those backup jobs, whether they be physical or virtual. That said, at the very least we are able to report on all of our backup jobs from one UI – maybe we will see something like this in a future release.

Aside from commonality and overall management, probably the most exciting piece of technology I found within the BDR suite is the Hive file system and the associated Virtual Drive technology. Being able to present the backup files in formats such as vmdk, vhdx, and ISO is a pretty nifty feature – but doing so in a matter of seconds, no matter what the source of the backup was, is very impressive indeed!

With all that said I would most certainly recommend the Vembu BDR suite, especially to those companies looking for a common interface to protect their VM backups, physical backups, and SaaS backups all with one application. From my experience the product has performed very well, completing backups and replications within a decent time frame with some impressive performance results. All this said, you don’t have to simply take my word for it – you can download the product yourself, and by default all downloads come with a 1-month free trial so you can begin testing it out on your own terms. Aside from the BDR Suite, Vembu offers a wide variety of products, such as a monitoring solution to centralize the monitoring of all of your backups, as well as some which are absolutely free, such as Desktop Image Backup and the Vembu Universal Explorer for discovering application items within Microsoft apps. I mentioned before that I hadn’t heard of Vembu prior to starting this review – and if you haven’t, you certainly should check them out.

Vembu BDR in the lab – Part 1 – Backup!

I’ve done quite a few reviews over the past five years or so on this blog, and for the most part I always had somewhat of an idea who the company was that I was writing about – however, this time things are a bit different! I’d never heard of Vembu before they gave me a shot at testing out their flagship software, the Vembu BDR Suite, which is why I was pleasantly surprised when I started doing a bit of research on the company! Vembu was founded way back in 2002, and even though their first products to market were created around SQL migrations, their main focus then was backup – and has been now for over a decade! Being in the backup business for 10 years you would think I would’ve heard their name; however, Vembu was more of a “behind the scenes” product – initially aimed at Managed Service Providers who at the time would white-label the software for resale – something I, or anyone within the community, wouldn’t have noticed! It’s that 10+ years of experience that shines through in the new release of the Vembu BDR Suite. Now, what surprised me the most was the sheer number of products and platforms that Vembu BDR supports – Vembu can back up vSphere, Hyper-V, physical servers, desktops, Exchange items, SharePoint items, SQL items, Office365, Google Apps, etc. – and they can back that up on-site, off-site, or even to the cloud, be it the Vembu cloud or Amazon – not to mention that most of this is done within a single UI – impressive! With all of this going on I was surprised at how little I’d heard of them. Either way, I’ve seen lots around the community as of late from Vembu and was excited to get them in the lab and try out some of their tech!

Now, due to the sheer number of supported platforms and options I’ve decided to break this review up into a couple of parts. This part will focus solely on setting up our backups – be they physical, VMware, or Hyper-V – along with explaining a little about Vembu’s image verification that ensures we have restorable backups when the time comes that we need to use them. The next part will explore what really matters when it comes to backup – and that’s restoring! Vembu provides a lot of flexibility when it comes to restores – object-level, file-level, VM-level, etc. – and really has a unique take on how they expose their backup files for restoration, such as their Hive file system and Virtual Drive mounts.

So with that all said let’s get on to the show….

Getting Vembu in the lab

We will quickly go over deployment and configuration, as getting Vembu up and running is very, very simple. Vembu provides a number of options to get started with the BDR Suite – think virtual appliance for both Hyper-V and VMware, Windows, Linux, etc. Again, not only providing support for backing up multiple platforms, but providing choice when it comes to what environment you want to run Vembu in. I chose to go the Windows route, but either installation method provides you with the same easy-to-follow, wizard-driven install. Essentially, the Windows install simply prompts for a password for the MySQL server, a location for the MongoDB, the system/user account to run the Vembu BDR service under, a username and password for BDR itself, and a location to use for the default repository where BDR will store its backups. Not a lot of information to have to provide for the amount of configuration the installer actually performs.

As far as configuration goes there isn’t a lot we need to do after installing Vembu BDR.  We could navigate to the Management menu and set some things up such as the time zone and SMTP settings for reports, however you can also just get into the nitty gritty – which is what I chose to do…

Backup Time

As mentioned above Vembu BDR supports both VMware and Hyper-V, along with physical servers, workstations, etc. As well, we can process restorations of object-level items such as MS Exchange emails and SQL databases, along with backing up SaaS-based applications like Office365 and Google Apps. It would be impossible to take a look at every one of these in one review, so we will focus on probably the three most popular platforms: VMware, Hyper-V, and physical.

Backing up VMware vSphere VMs

In order to start backing up our VMware environment we will need to provide Vembu with some information in regards to either our standalone ESXi hosts or our vCenter Server.  By selecting ‘Add VMware vSphere Server’ (after going to Backup->VMware vSphere) we are able to start processing virtual machines which live on either our vCenter Server or a standalone ESXi host.  The process of adding our vSphere environment is quite simple and accomplished by simply entering in the hostname/ip of our vCenter Server/ESXi host and passing along some credentials – a pretty common process in any application which requires data from vCenter.


Once our server has been added we can immediately begin to back up our VMs. Creating a job within BDR is done through the Backup menu (shown above) and then selecting the ‘Backup Now’ option next to the vCenter/ESXi server you wish to process – doing so will begin the 5-step process of creating our first vSphere backup job within Vembu BDR.


The first step we need to complete is telling BDR exactly what we wish to back up. As you can see above, I simply selected the checkbox next to my desired VM and clicked ‘Next’. The release of BDR 3.6 added support for backing up complete clusters as well, and if you wish to exclude any VMs (when selecting a complete host/cluster) or VM disks, this can be done by simply clicking the ‘VMs/Disk Exclusion’ button. A pretty easy and self-explanatory process…


As far as scheduling goes we have the option to run our backups hourly, daily, or weekly, and set the desired days/hours in which we would like to execute the job. With the smallest interval being a backup run every 15 minutes, you can be sure to prevent as much data loss as you can. Here I’ve left the defaults as is, to run hourly, every day.


As we move into the retention configuration we start to see some of the more advanced functionality that BDR provides us with. Before explaining all of our options it’s best to describe a little bit about how BDR stores its backups on disk. During the first run of a backup job, BDR will take a full backup of our VM from the production storage – each subsequent run will leverage vSphere’s Changed Block Tracking and copy only the differences, or those blocks which have changed within the VM since the last backup, and then store them in incremental files. When the retention period is hit, BDR takes the oldest incremental restore point and injects it into the full backup file, releasing that space to be used as free capacity. With all that said, let’s take a look at the retention options available to us.
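To make that forever-incremental mechanic a bit more concrete, here’s a tiny simulation of a backup chain where the oldest incremental gets folded into the full once retention is exceeded. This is purely my own illustration of the concept – it is in no way Vembu’s actual implementation:

```powershell
# Illustrative sketch only: model a backup chain as a list where item 0 is the
# full backup and the rest are incrementals, merging the oldest incremental
# into the full whenever the chain exceeds the retention count.
function Add-RestorePoint {
    param([System.Collections.ArrayList]$Chain, [string]$Point, [int]$Retention)
    [void]$Chain.Add($Point)
    while ($Chain.Count -gt $Retention) {
        $oldest = $Chain[1]                  # $Chain[0] is always the full
        $Chain[0] = "$($Chain[0])+$oldest"   # inject oldest incremental into the full
        $Chain.RemoveAt(1)                   # its restore point's space is released
    }
}

$chain = [System.Collections.ArrayList]@("Full")
"Inc1","Inc2","Inc3","Inc4" | ForEach-Object { Add-RestorePoint $chain $_ 3 }
# With retention 3 we are left with one (merged) full plus two incrementals
$chain   # -> Full+Inc1+Inc2, Inc3, Inc4
```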

The default retention setting on this step, Basic Retention, essentially means we will keep 1 full backup and X number of incremental backups on disk – so if we select 3 as our retention policy, we will be left with one full and two incremental backups, giving us a total of three points in time to which we can restore. By selecting Advanced Retention we can apply a more robust Grandfather-Father-Son (GFS) retention scheme to our VMs. GFS retention within BDR allows us to merge our backups into additional full backups, ensuring we are covered when it comes to meeting compliance on restore points. Think of situations such as having hourly incremental backups take place while merging them into a daily full, in turn merging our daily fulls into weekly fulls, and so on and so forth for monthly and even yearly. Essentially, GFS gives us the benefits of small RPOs while maintaining the assurance and compliance of having the number of daily/weekly/monthly restore points our business requires.
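As a rough illustration of how a GFS scheme tiers restore points, the sketch below classifies a restore point by its timestamp – promoting midnight runs to daily fulls, Sunday midnights to weekly fulls, and the first of the month to monthly fulls. The tier boundaries here are arbitrary choices for the example, not Vembu’s defaults:

```powershell
# Generic GFS illustration (not Vembu's implementation): decide which tier a
# restore point taken at a given time belongs to under grandfather-father-son.
function Get-GfsTier {
    param([datetime]$Time)
    if ($Time.Day -eq 1 -and $Time.Hour -eq 0)               { return 'MonthlyFull' }  # grandfather
    elseif ($Time.DayOfWeek -eq 'Sunday' -and $Time.Hour -eq 0) { return 'WeeklyFull' } # father
    elseif ($Time.Hour -eq 0)                                { return 'DailyFull' }    # son
    else                                                     { return 'Incremental' }
}

Get-GfsTier ([datetime]'2017-01-01 00:00')  # first of the month at midnight -> MonthlyFull
Get-GfsTier ([datetime]'2017-01-08 00:00')  # Sunday at midnight             -> WeeklyFull
Get-GfsTier ([datetime]'2017-01-03 00:00')  # weekday at midnight            -> DailyFull
Get-GfsTier ([datetime]'2017-01-03 11:00')  # any other hour                 -> Incremental
```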

Aside from retention, we also see a couple of other settings on this step.

  • Application-Aware Options – BDR can utilize VMware Tools to invoke the VSS writers within a VM in order to ensure that the backups are application consistent – it’s here where we set up options such as how to proceed if application processing fails, as well as specify which credentials to use and how to handle transaction logs (truncate or leave alone)
  • Additional Full Backups – BDR by nature creates one full backup along with the many incremental backups following it. Having a long chain of incremental backups may be OK in some environments, but enterprise organizations may want additional full backups performed periodically to sit in between the incremental backups. Here we can set up full backups to be performed in addition to the incrementals on an hourly, daily, or weekly schedule. We can also limit the number of full backups we want to keep on disk in order to preserve space and capacity on our repositories.


The Review Configuration step is just that – a step that allows us to specify a name for our backup job, as well as review the selected configuration we have made in the past three steps.  From here, we can simply click ‘Run the backup’ to execute our newly created job.


After committing our backup job we are brought directly to the ‘Progress Details’ section. No matter what schedule we provided, Vembu will always immediately begin the first full backup of the VMs specified. Here we can see the associated tasks and events, as well as transfer and progress rates, of our newly created job as it runs. This progress screen isn’t cluttered with a bunch of statistics that we don’t necessarily need to see – it’s a clean and simple UI that shows you a percentage complete, along with the total backup size and current transfer rate – everything we would want to know about a job in progress.


Once our first job has been setup we can view details about it by navigating to Backup->List Jobs.  Here we can do many things such as suspend (disable), edit, or delete our jobs, as well as view their current running status.  Clicking the Reports icon will allow us to drill down into more detail about the last run of the job, showing us the size, status, and time taken to perform the job.

Essentially this is it for creating and running a backup job for vSphere within Vembu BDR – however, there is one more protection strategy for VMware that Vembu provides – replication.

Replicating VMware vSphere VMs

Selecting VM Replication from the top menu, then clicking the Replicate Now icon next to your desired vCenter server, will introduce us to the Replication wizard.  The Choose VMs and Configure Scheduling options are the same as those of the backup process we went through earlier, allowing us to select our desired VMs and set up an hourly/daily/weekly schedule for replication.  That said, things start to change a bit as we move into the third step of the setup.


The Target Replication Host section is where we specify where we would like to host our replicated virtual machine.  Here we can select an existing vCenter/ESXi host or ‘Add DR VMware Server’ if we would like to add a new one.  Also specified are the datacenter and datastore where the replicated VM will reside.  Since this will be an exact copy of our VM we have the option to add a suffix to its name in order to distinguish it from our production VM, as well as set our desired retention policy on the replica.


Network Mapping allows us to map between our source and target networks.  When replicating to an offsite location we often have different network names within our vSphere environments that we attach our VMs to in order to gain network connectivity.  Network mapping allows us to configure a table of sorts that maps our source production networks to networks within our DR site, eliminating a mundane and time-consuming step when it comes to actually failing over our replicas.
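Conceptually, a network mapping table is just a lookup from production port groups to DR port groups.  A tiny Python sketch of how such a table gets applied at failover – all network names and the function itself are hypothetical, purely to illustrate the idea:

```python
# Hypothetical mapping of production port groups to DR port groups.
network_map = {
    "PROD-VM-Network": "DR-VM-Network",
    "PROD-DMZ": "DR-DMZ",
}

def remap_nics(nic_networks, mapping):
    """Swap each NIC's network for its DR equivalent,
    leaving any unmapped networks untouched."""
    return [mapping.get(net, net) for net in nic_networks]

replica_nics = remap_nics(["PROD-VM-Network", "PROD-DMZ"], network_map)
```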


Re-IP Mapping allows us to specify some rules around what our replica IP address will be, supporting the situation where we may have different addressing schemes between the production and replication site.  Adding a Re-IP rule is as simple as clicking ‘Add Rule’ and specifying the source and target IP schemes, along with DNS and Gateway information.  Then, during failover, BDR will automatically apply these Re-IP rules to the replicas in order to ensure we have the proper connectivity during a disaster.
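At its core, a Re-IP rule translates an address from the production subnet into the DR subnet while preserving the host portion.  Here is a hedged Python sketch of that idea – the rule shape and function are my own simplification, not BDR's actual rule format:

```python
import ipaddress

def apply_reip(ip, rules):
    """Translate an IP using the first matching (source_net, target_net)
    rule, preserving the host offset within the subnet."""
    addr = ipaddress.ip_address(ip)
    for src, dst in rules:
        src_net = ipaddress.ip_network(src)
        if addr in src_net:
            dst_net = ipaddress.ip_network(dst)
            # Keep the host portion, move it into the target network.
            host = int(addr) - int(src_net.network_address)
            return str(ipaddress.ip_address(int(dst_net.network_address) + host))
    return ip  # no rule matched; keep the original address

new_ip = apply_reip("192.168.1.25", [("192.168.1.0/24", "10.0.50.0/24")])
```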


Again, just as with our backup job, once configured we will be brought to the same simple, clean progress screen outlining everything we need to know about the running process of our replication job.

Hyper-V Backup

Adding a Hyper-V server into Vembu is fairly straightforward and similar to adding a VMware host.  For Hyper-V we can add either a standalone host or an SMB host.  Either way, once added, Vembu will push what they call ‘Integration Services’ to the hosts in order to handle the processing of VMs.  To do so, BDR needs administrative privileges, meaning you will have to ensure that the account the Vembu service runs under has sufficient privileges to install software on your Hyper-V hosts.  Once you provide the proper credentials and hostname of your Hyper-V host, the integration services are deployed and the host will be available for you to back up inside of Vembu.  These integration services contain a number of technologies used to create the full backup, as well as a CBT driver to track modified blocks for incremental backups.  If our disks are stored on an SMB host, we will need to deploy the Vembu Integration Services there as well.


Similar to vSphere we can begin creating jobs directly after adding our Hyper-V host by clicking the ‘Backup Now’ button.


Just as we did in our vSphere backup, we simply check the VMs we wish to include in this job, and use the ‘VMs/Disk Exclusion’ button to back up only certain VMs, or certain disks within the VMs.













As we can see above, the backup options are identical to those in the VMware backups.  We have the option to set our schedules and apply different retention policies such as GFS, restore points, etc., as well as configure options related to application-aware processing and additional full backups.


Again, after naming and creating our Hyper-V job we get a nice backup progress screen that auto-refreshes throughout the backup process.  As we can see here we are getting some very nice performance with our Hyper-V backup job, processing at over 500 MB/s!

Physical Machine Backup

Aside from backing up virtual machines, Vembu BDR provides data protection for physical servers and desktops as well.  As shown below we have a couple of options when we browse to ‘Backup->Physical Machine’: ‘Physical Image’ and ‘Files and Applications’.


What each option does is actually quite different.  The ‘Physical Image’ option allows us to back up a complete physical machine and store it as an image file, while ‘Files and Applications’ allows us to back up just the files or application items – Exchange mailboxes, SQL databases, etc. – that we prefer.  No matter which option is selected we are taken to a page on Vembu.com to download the respective client, which will need to be installed on the physical machine we wish to protect.  Let’s go ahead and take a look at the physical image component of Vembu BDR.

The client/agent installation is quite simple, requiring only that you specify a globally unique ID that will allow you to identify the machine within Vembu BDR.  After that it takes care of pulling down all of the prerequisites and required packages needed to run.  Once installed we can go ahead and run the Image Backup web console, which takes us to a familiar UI, similar to that of Vembu BDR.  Keep in mind these clients are installed on the physical server we wish to back up, not the Vembu BDR server.


After logging in and setting a time zone we simply need to point our client to our desired Vembu BDR server.  Once connected we will be redirected directly into our backup job setup as shown below….


As we can see, the layout and UI of the wizard is exactly the same as that of the Vembu BDR server where we were setting up backup jobs for our hypervisors and VMs.  It’s nice to have this uniformity between the different components that make up the BDR suite.  Also, as we can see above, before we are able to create our backup we will need to install the Vembu ImageBackup disk image driver.  I asked Vembu why they opted to go the route of having a second install for the image driver rather than simply bundling it in.  Their answer has to do with reboot policies – rather than set a reboot policy and automatically reboot our production workloads, Vembu gives us the option to install first and reboot when it is appropriate for us.  Either way, after rebooting and re-authenticating to the client, the same wizard appears as below…


The first step, just as in a virtual machine backup, is to choose our source – what we want to back up.  For a hypervisor this means VMs; for physical machine image backups, it means physical disks and partitions within the server itself.  As we can see above I’ve chosen to back up all of the partitions on my physical machine.


On the next step we once again see the familiar scheduling settings as we did within Virtual Machine backups.  Set our desired schedule, as well as our target backup server and click ‘Next’.


The last configurable step of the wizard allows us to specify the retention on the backups we take.  The same options that were available within virtual machine backups apply to physical server backups as well – meaning we can set up GFS backups and application-aware settings, as well as schedule additional full backups to be kept on disk alongside the set retention policies.


Once completed, our initial full backup will start and we will be left with a nice progress screen just as we have seen within the virtual machine backup.  The backup of a physical machine is pretty straightforward and easy, but I would still like to see some way of deploying these clients centrally from our BDR server, as well as setting up their initial backup jobs – maybe in a future release!  It should be noted that although we do not centrally create these physical backups through BDR, we can indeed report on them.  As we can see below, the screenshot on the left is the reporting from ImageBackup locally on the physical server, with the corresponding screenshot on the right reporting on our physical backup amongst all other VM backup operations (never mind the failures, as I had some VMs powered down in my lab during certain times).  We can also see that the performance and deduplication provided by ImageBackup is very good, taking only a handful of minutes to back up roughly 22 GB and compressing it down to 16 GB.



Aside from simply setting up the backup job, the Image Backup UI contains some other useful information and configuration options as well.  Under configuration we can see a number of options.

  • User Management – allows us to create users and grant access to the Vembu ImageBackup UI
  • Backup Schedule Window – allows administrators to define certain times of certain days where backups should NOT run – thus guaranteeing that our backups will not impact our business during certain production hours.
  • Bandwidth Throttling – This can be used as another means to limit the impact of backups on our production networks.  Bandwidth throttling allows us to cap the network bandwidth consumed by backup jobs at a set number of MB/s.  We can throttle at all times or only between certain times of the day, with the option to include or exclude weekends when the production network may not be in use.
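The time-window logic described above can be sketched in a few lines of Python – the parameter names are mine, purely to illustrate the "throttle between certain hours, optionally excluding weekends" behaviour:

```python
from datetime import datetime, time

def throttle_active(now, window_start, window_end, include_weekends=False):
    """Return True if bandwidth throttling applies at `now`."""
    # Monday is weekday 0; Saturday/Sunday are 5/6.
    if not include_weekends and now.weekday() >= 5:
        return False
    return window_start <= now.time() <= window_end

# Throttle during business hours on weekdays only:
monday_noon = datetime(2017, 1, 23, 12, 0)    # a Monday
saturday_noon = datetime(2017, 1, 28, 12, 0)  # a Saturday
active_mon = throttle_active(monday_noon, time(8, 0), time(18, 0))
active_sat = throttle_active(saturday_noon, time(8, 0), time(18, 0))
```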

Image Verification

So now that we have set up multiple jobs, backing up VMware, Hyper-V, and a physical machine, the next logical step is to perform some restores!  That said, before we go into the restore process I wanted to talk a bit about what Vembu calls Image Verification.  Image Verification ensures that before we perform a restore on a given backup, our data will indeed be restorable, correct, and intact.  Vembu’s Image Verification takes a tiered approach, attempting to detect many of the factors that could cause failure during a restore…

  1. Mount Check – A mount check takes our backed-up VM and mounts it to the BDR server.  This ensures that if we ever need to perform an instant mount during a DR scenario we will be successful.  We will talk more about this mount process in part 2 of this review.
  2. Boot Check – Our backup is booted up within a virtual machine.  Once booted, Vembu takes a screenshot of the booted VM and stores it within the configuration – allowing us to get visual “peace of mind” that our backups are restorable.
  3. Integrity Check – This is an optional step of the verification, as it performs a chkdsk on our VMs, which can take quite a bit of time.

Vembu’s implementation of Image Verification is not something we need to schedule as it is in other backup products – instead, Vembu automatically runs the verification process once a day by default.  Certainly this is a nice feature to have, as nobody wants to go to restore a VM only to find out that the backups themselves are corrupt!
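The tiered flow above – mount check, then boot check, with an optional integrity check, stopping at the first failure – can be sketched like this in Python.  The restore-point interface is invented for illustration; it is not Vembu's API:

```python
def verify_restore_point(rp, run_integrity=False):
    """Run the tiered checks in order; return (ok, failed_check)."""
    checks = [("mount", rp.mount), ("boot", rp.boot)]
    if run_integrity:
        checks.append(("integrity", rp.chkdsk))  # optional, slow
    for name, check in checks:
        if not check():
            return (False, name)  # stop at the first failing tier
    return (True, None)

class FakeRestorePoint:
    """Stand-in restore point whose boot check fails."""
    def mount(self): return True
    def boot(self): return False
    def chkdsk(self): return True

result = verify_restore_point(FakeRestorePoint())
```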

Stay tuned for Part 2

So we have gone over the backup of vSphere, Hyper-V, and physical environments, as well as touched on items such as vSphere replication and Image Verification.  What I really liked about going through each process was the uniformity – no matter what we were doing, the wizards and configuration of jobs were very similar – no need to learn new terminology and processes when switching source environments.  In our next post we will take a look at what really matters – recovery – going through all of the different restore types as well as replica failover.  For now, if you want to get started with your own Vembu deployment you can do so by downloading a one-month free trial of the entire BDR suite!  Thanks for reading and stay tuned for part 2.

Nakivo Screenshot Verification – Getting to know your backups!

Picture yourself in this scenario – you walk into work on a Monday morning where you are promptly greeted by just about every IT staff member in the building.  They quickly tell you that certain mission-critical services are down and the company is losing money as we speak.  But that’s OK, you are here, the infamous backup guy – the guy no one cares about unless things go down.  Nonetheless you sit at your desk and begin the restore process.  “No problem”, you say, “services should be back up in 10 minutes or so…”.  The VM is fully restored and powered on – you sit, watching the console and the Windows spinning wheel, and all of a sudden you see this!


Yikes!  Your backups, just like their production counterparts, are corrupt – you try a different restore point; no go, they are all corrupt.  Believe it or not this is a common scenario that plays out inside organizations everywhere.  Backups, just like any other production workload or service, need to be tested in order to ensure that they are indeed restorable.  The best way of testing backups is to perform full restores of the data – however, doing so after each and every backup job can be quite time consuming and inefficient in terms of resource usage.

Enter Nakivo

Nakivo does a great job of backing up and protecting your VMware and Amazon environments and has a lot of features included within their product, Nakivo Backup and Replication.  I’ve previously reviewed the Nakivo product as a whole here if you’d like to check it out.  This post will focus on one feature – a feature that helps to prevent situations like the one above – Nakivo Screenshot Verification.  There is no worse feeling than having terabytes of backups that prove to be unreliable and, in turn, useless – which is the exact reason why Nakivo has developed Screenshot Verification inside of their Backup & Replication software – to give you peace of mind that when push comes to shove, your backups will indeed be restorable!

What is Screenshot Verification?

Screenshot verification is a simple concept with a lot of underlying technology at play – in its basic form, Nakivo will verify each VM backup after a new restore point is completed.  This is done by booting the VM directly from its corresponding deduplicated and compressed backup files located on a Nakivo backup repository, a process that Nakivo calls Flash VM Boot.  During a Flash VM Boot, Nakivo creates a new VM on a specified ESXi server.  It then takes the disks from within the backup files and exposes them as iSCSI targets; upon completion, the disks are mounted to the new VM as vRDMs.  A snapshot is created in order to allow disposal of any changes, and the newly created VM is powered on, isolated from the production network.  Once booted, Nakivo utilizes VMware Tools to take a screenshot of the booted OS.  After the screenshot is taken, the newly created VM is discarded, the backup files are brought back to a consistent state, and the screenshot is included within any job reports, whether generated through the UI or emailed.
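The Flash VM Boot sequence described above can be summarized as an ordered list of operations.  This small Python helper simply encodes that order for illustration – none of these strings correspond to actual Nakivo API calls:

```python
def flash_vm_boot_steps(disk_count):
    """Return the ordered operations of a Flash VM Boot verification,
    as described above (illustrative only)."""
    steps = ["create placeholder VM on ESXi host"]
    steps += [f"expose backup disk {i} as iSCSI target" for i in range(disk_count)]
    steps += ["mount iSCSI targets to the VM as vRDMs",
              "snapshot VM so changes can be discarded",
              "power on VM on isolated network",
              "screenshot guest OS via VMware Tools",
              "discard placeholder VM"]
    return steps

steps = flash_vm_boot_steps(2)  # a backup with two virtual disks
```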

It’s this screenshot that gives you the “peace of mind” that when the time comes to restore your VMs, they will indeed be restorable!  A simple picture of the Windows login screen or Linux bash shell – or lack thereof – certainly would’ve helped in the above scenario, alerting us that the next time we try to reboot our production VM or restore from a backup, problems may occur – giving us the leeway and time to fix the situation or restore to our last known good restore point on our own terms, rather than doing so during an emergency.

How do we set this up?

As far as how to set up and configure Nakivo Backup and Replication as a whole, I would recommend checking out my previous review here – but focusing solely on Screenshot Verification, let’s go through the steps below…  **Note: we are setting this up for one of our backup jobs; however, we can also enable screenshot verification for our replication jobs as well.**

Screenshot verification, albeit with a lot of moving parts underneath, is actually a simple Enable/Disable feature within the backup job.  Nakivo has done a great job of abstracting away all of the complicated technology underneath and presenting us with some simple and easy-to-use configurable options.  On the last step of a job wizard, we see the Screenshot verification setting at the bottom of the first column (as shown below)…


Upon selecting ‘settings’ we are presented with some more options which we can configure.  The target container is the placeholder in which we will register the newly created VM that will be mounted to the backup files.  This can be your average vSphere object that VMs belong to, such as a host or cluster.  Target datastore is where we would like to place the configuration files (vmx) of the VM that is created.  Verification Options allows us to limit the number of VMs which will be verified simultaneously.  Running too many Screenshot Verification tests at once can produce a heavy load on your backup repositories, causing major delays in boot time depending on your hardware configuration – it’s best to tune this to your liking.  Also configurable here is the RTO, which in this case defines the number of minutes the VM has to fully boot and initialize VMware Tools.  If this time is exceeded, the VM is labeled as failed and the placeholder VM is discarded.  We can also set the delay between when the guest OS has booted and the actual execution of the screenshot.
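The RTO option boils down to a simple timeout comparison: if boot-plus-Tools initialization exceeds the allotted minutes, the verification run is marked failed.  A hedged sketch (function and labels are mine, not Nakivo's):

```python
from datetime import timedelta

def verification_result(boot_duration, rto_minutes):
    """Label a verification run based on the configured RTO:
    exceeding it marks the run as failed."""
    if boot_duration > timedelta(minutes=rto_minutes):
        return "failed"
    return "passed"

slow = verification_result(timedelta(minutes=12), rto_minutes=10)
fast = verification_result(timedelta(minutes=4), rto_minutes=10)
```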


Honestly, this is all we need to do!  Simply save your job and on your next job run Screenshot verification should take place.  As shown below, we can see the events that take place within vCenter during a Screenshot verification test, along with the placeholder VM that is created in order to perform these tests, noting the creation and deletion of the VM, along with any required iSCSI setup.  This is all automated by Nakivo and requires no manual setup on your part.


So we have now seen that Screenshot verification has been executed, but what does it look like in one of the reports/emails?  Right-clicking any job within Nakivo gives us the ability to run a few reports – the one we are most interested in now is the ‘Last run report’.  After generating and opening the ‘Last run report’ for our job with screenshot verification enabled, we should see new information included in the report.  As shown below we now have a ‘Last verification’ row, indicating whether or not the screenshot verification was successful – in addition, we can also see the actual screenshot that was taken by Nakivo.  Below we see the actual login screen, giving us a pretty good indication that a restore from this backup would be successful.


Hey, Let’s have some fun!

As you can see, Screenshot verification is a very valuable tool, giving us that peace of mind that our backups are actually restorable.  But where’s the fun in that, right?  Let’s break some stuff and see how Screenshot verification reacts…

So, on my production VM let’s mimic some corruption and see if we can’t get Nakivo to detect it before we do!  In order to do this I’ve run the following commands on my production VM within an administrative console (***NOTE*** Don’t do this in production, please, please don’t do this in production)

takeown /F C:\Windows\System32\WinLoad.exe

cacls C:\Windows\System32\WinLoad.exe /G administrator:F

del C:\Windows\System32\WinLoad.exe

bcdedit /set recoveryenabled No

The first three lines are pretty self-explanatory – taking ownership, assigning rights, and deleting WinLoad.exe, the file that actually executes the loading of Windows upon boot.  The last line simply disables automatic repair, Microsoft’s line of defense for preventing people from doing stupid things like this.  Anyway, we’ve essentially botched our server here; however, we won’t notice until we do a reboot – something that probably doesn’t happen that frequently in a production environment – thus, it’s probably going to go unnoticed for quite some time.  That is, unless we are utilizing Nakivo’s screenshot verification on our backup jobs.

Let’s go ahead and run our backup job again on this same VM.  This time, we will see Nakivo report a failure on the backup job, specifying that screenshot verification has failed – upon further investigation, we can see below what appears on the console of the VM used for the verification, and it is exactly what would happen to our production VM if we were to reboot it!  Even though our newly created backup is not restorable, at least we now know, and it won’t be a surprise to us in an emergency situation like the previous scenario.  This gives us time – time to come up with a plan, whether that be restoring from a known good backup, coming up with some sort of failover plan, or even building a new server.


So in the end Screenshot verification proves to be a very valuable tool in any backup administrator’s belt – whether that be knowing that your backups can be restored successfully or, sometimes even more important, knowing that they can’t – and in some cases, Screenshot verification can be leveraged to prevent production outages by giving a preview of things to come upon the next reboot!  The Flash VM Boot technology makes Screenshot verification a no-brainer in my opinion.  If you are using Nakivo, you should be enabling this on all of your mission-critical VMs.  To learn more about Screenshot verification and other Nakivo features check out their help center here.  Fancy trying it for yourself?  You can get a full-featured trial here, or if you are a VMUG member, VCP, or vExpert, why not grab a free NFR license to tinker with!  If that isn’t enough options for you, Nakivo also offers a fully featured free edition – yes, all of the same features of their premium paid versions, just limited to a couple of VMs.  Thanks for reading!

Nakivo Instant Object Recovery for Microsoft Active Directory

Nakivo, a backup company based out of Silicon Valley, has been providing backup and replication software to the world since late 2012.  Today we will not focus so much on getting Nakivo up and running – we’ve already done that thoroughly here – but instead we will take a look at one individual feature: Instant Object-Level Recovery for Microsoft Active Directory.  Let’s face it – mistakes happen – users get deleted, OUs get wiped out, security groups get completely out of sync.  This all happens, and more often than we know.  Certainly performing a complete full restore of a domain controller can be a little over the top just to get one individual user back (depending on who it is, I suppose), which is why Nakivo has provided a means for restoring individual Active Directory objects since their 5.5 release back in March of 2015.  Today we will take a more in-depth look at just how we perform these restorations.  Rather than simply showing how things are done, I thought I’d have a little more fun with it this go-around and put a little story behind it for all of our enjoyment 🙂  With that said, let’s dive in!

The Scenario

Let’s paint the scene – you are a sysadmin working for a pretty famous hockey club based out of Montreal.  You are using Nakivo to protect a couple of datacenters, one in Montreal and another in Brossard, with a fully virtualized Active Directory.  One morning, for whatever reason, your supervisor was a little off his game – maybe it was too much wine the night before, or perhaps he had a heaping of bad poutine at lunch – but when asked to disable and enable certain players’ directory accounts after a blockbuster trade, he had a slip-up.  Sure, he disabled the “psubban” account of the outgoing player as he was asked to, however in the process of creating the new “swebber” account, somehow he ended up deleting Andrei Markov’s account (amarkov).

It wasn’t until Andrei showed up for practice that morning that anyone noticed – Andrei attempted to log in and quickly realized that something was up.  When the helpdesk ticket finally made its way to your supervisor’s desk he knew immediately what had happened, and quickly called upon you to help out.  “No worries”, you said, “We’re protecting that server with Nakivo!”

How can Nakivo help you?

Thankfully you had already set up a backup job which processes a domain controller belonging to the canadiens.local domain, the same domain the user was accidentally deleted from.  We won’t go into the nitty-gritty details of how to set up the backup job here, as this post focuses solely on the recovery – we have covered it in detail in another post if you’d like to check it out.  Instead we’ll go through the steps to restore Andrei’s account.  The first thing we need to do is fire up a browser and log into Nakivo Backup and Replication.  After logging into the application, simply selecting ‘Microsoft Active Directory objects’ under the ‘Recover’ menu kicks off the process (shown below).


The next step is quite simple and pretty self-explanatory – we select the backup of our domain controller, in our case named MSDC, and then select a desired restore point to restore from.  As shown below we also have the option to ‘Automatically locate application databases’, which is checked by default.  If we happened to know the exact location of the AD database we could uncheck this and specify the path, and in turn maybe save a little time, as Nakivo wouldn’t need to scan for the ntds.dit file.  Honestly though, the amount of time it takes Nakivo to locate the Active Directory database is trivial, so let’s leave this checked and click ‘Next’.


Nakivo will now take a moment to load the desired restore point and display it to us.  The amount of time this takes depends greatly on the size of your Active Directory infrastructure.  Canadiens.local is relatively small and took only a few seconds to load – but before we move on it’s good to go over what is happening behind the scenes.  Nakivo Backup & Replication is actually scanning and mounting the server directly from within the compressed and deduplicated backup file – at no time does it perform a full recovery of the VM itself, saving us valuable time as we only need to restore that one individual object.  As shown below we are presented with a screen on which we can browse through the entire Active Directory infrastructure and find the object we’d like to restore.  It should be noted that Nakivo supports object-level recovery not just for users, but for containers and groups as well – so if it was an Organizational Unit or Security Group that was deleted, we would be able to restore it in the same manner.  Next we select the object by simply clicking the checkbox beside it, and then click ‘Download Selected’.  Alternatively we could click ‘Forward Selected’ to have Nakivo email out the ldif files to be used for import.  At this point we have a couple of Recovery settings we can specify: ‘User will be disabled’, which restores the user with the account disabled, or ‘User must change password at next logon’, where Nakivo automatically generates a new password for the restored user and sets the ‘Change password on next logon’ flag in AD.  Any password Nakivo generates will be stored in a ‘Passwords.txt’ file included in our download.


After downloading the recovery bundle (it should come in .zip format) we can now get started on restoring Andrei Markov’s account back into the canadiens.local domain.  We do this by first extracting the bundle and copying the extracted folder back to the domain controller.  Since we are importing a user object back into Active Directory we need to have LDAPS, or certificate services, enabled and configured on the domain controller.  Thankfully the canadiens.local domain is already set up this way; however, if we need to implement LDAPS there is a great post here on how to go about it.  Once we are back on the domain controller console we can simply open up an administrative command prompt and run the following command…

ldifde -i -t 636 -f filename -j logfolder <- where filename is the path to the downloaded ldif from Nakivo and logfolder is a path for import logs to be placed.

We can see a screenshot below of the before and after shots of the canadiens.local domain, with the after showing that Andrei Markov’s account has indeed been restored.

nakivoad-before nakivoad-after

With that you can now breathe easy, as Andrei’s account is fully restored back into Active Directory, including all of his user attributes, group memberships, etc.  Honestly, it’s as if it was never deleted!  This whole process moves very quickly within Nakivo – honestly, within minutes – and when the time comes where you need to do a restore, especially one revolving around user access, time is most certainly of the essence.  Nakivo could certainly shave even more time off this process by implementing some way to automate the ldif import, or import directly back into the production VM – but honestly, the simplicity of the whole process far outshines the fact that it needs to be manually imported.  For now, you and your supervisor can get back to what matters most: the quest for Lord Stanley.

If you would like to learn more about Nakivo’s Instant Object Recovery for Active Directory or any other feature they offer I highly recommend checking out their help center here, where you can find items such as their knowledge base, release notes, and a very well written user guide.  Also if you want to check it out for yourself you can get a full featured trial here, or if you are a VMUG member, VCP, or vExpert why not grab a free NFR license to tinker with!  If that isn’t enough options for you Nakivo also offers a fully featured free edition – yes, all of the same features of their premium paid versions, just limited to a couple VMs.  Thanks for reading!

Cohesity 3.0 – One platform for all your secondary storage!

After just over half a year of making their 1.0 product generally available, Cohesity, a company based out of Santa Clara, has announced version 3.0 of their flagship secondary storage products, DataProtect and DataPlatform.  I had the chance to take a 1:1 briefing with Cohesity to check out what’s new and find out just how they define secondary storage, and thought I’d share my thoughts around the new features and overall solution from Cohesity here…

What is secondary storage?

Before we get too in-depth around the features and benefits of the Cohesity platforms it’s nice to stop and take a look at just what secondary storage is.  Quite simply, Cohesity sees secondary storage as any storage hosting data that isn’t “mission critical”, and surprisingly they are also discovering that this non-“mission critical” data takes up the majority of an organization’s overall capacity.  As shown below, data such as backups, test/dev copies, file shares, etc. all fit into the secondary storage profile – data that is rarely used, fragmented, and complex to manage; data that Cohesity defines as “Dark Data”.


All of this “Dark Data” can become a bit of a challenge to manage and maintain – we end up with numerous backups that we don’t touch, and we have many appliances and servers within our datacenter performing various functions such as deduplication, compression, analytics, etc.  All of these moving pieces come with their own cost and their own hardware footprint, and for the most part have no way of interfacing with each other, nor can they scale together.  This is where Cohesity makes its play – simplifying secondary storage within your datacenter.

Cohesity – All your secondary storage – One Hyperconverged platform

Cohesity moves into the datacenter and aims to eliminate all of those secondary storage silos.  They do this by consolidating your backups, file shares, test/dev copies, etc. and moving them all onto a Cohesity appliance.  To get the data there, Cohesity first leverages their DataProtect platform.  DataProtect provides the means of backup: using seamless integration into your vSphere environment, Cohesity takes on the role of your backup infrastructure.  Utilizing user-created policies based on SLA requirements, Cohesity begins onboarding your backup data, adhering to specified RPOs, retention policies, etc.  From there, DataProtect also adds the ability to offload to cloud for archival purposes – think in terms of offloading certain restore points or aged backup files to Amazon, Azure, or Google.  Once the data resides on a Cohesity appliance a number of benefits are presented to customers: think analytics, with a Google-like search throughout all of your secondary data, looking for pre-defined templates such as social security numbers or credit card numbers.  DataPlatform also provides the ability to leverage copy data management to quickly spin up exact, isolated copies of our production environment directly on the Cohesity appliance.  This allows things such as patch management testing, application testing, or development environments to be deployed in a matter of minutes utilizing flash-accelerated technologies on the appliance itself.


Integrating all of these services into one common platform certainly has its benefits – lowering TCO for one; not having to pony up for support and licensing for four different platforms is the first thing that comes to mind.  Beyond that it provides savings in terms of OpEx as well – no more do we have to learn how to operate and configure different pieces of software dealing with our secondary storage, and no more do we have to spend time copying data between solutions in order to perform various functions and analytics on it.  We can just use one appliance to do it all, scaling as we need by adding nodes into the cluster and, in turn, receiving more compute, memory, and storage capacity, thus increasing performance of the secondary storage environment overall.

So what’s new in 3.0?

As I mentioned before, this is Cohesity’s third release in short order.  We saw 1.0 GA in October of 2015; 2.0 added replication, cloning, and SMB support in February of this year; and now we have 3.0 hitting the shelves with the following improvements and features…

  • Physical Windows/Linux Support – perhaps the biggest feature within 3.0 is the ability to now protect our physical Windows and Linux servers with DataProtect.  The same policy-based engine can now process those physical servers we have in our environment and allow us to leverage all of the analytics and search capabilities on that data that we have always had.
  • VMware SQL/Exchange/SharePoint Support – As we all know in the world of IT, it’s really the application that matters.  3.0 provides the ability to perform application-aware backups on our virtualized SQL, Exchange, and SharePoint servers in order to ensure we are getting consistent and reliable backups, with restores to any point in time or of individual application objects.  3.0 also adds source-side deduplication for these application-aware backups, meaning only unique blocks of data are transferred into the Cohesity platform during a database backup.
  • Search and recovery from Cloud – 3.0 also brings us the ability to search our data that has been archived to cloud and, more importantly, perform granular object-level recovery on that cloud-archived data as well.  This means the cost of moving data out of the cloud should decrease, as we are only moving the data we need.
  • Performance Enhancements – Utilizing a technology based on parallel ingest, Cohesity can now spread the load of ingesting individual VMs across all the nodes within its cluster – resulting in not only a capacity increase when you scale, but a performance increase as well.  They have also done much work around their file access services, basically doubling IOPS and throughput.

And to top it all off, Best of VMworld


A huge congrats to Cohesity on the 3.0 announcement, but an even bigger congrats goes out for winning “Best of VMworld 2016” in the Data Protection category!  If you want to learn more I definitely recommend checking out Cohesity’s site here, or, if you happen to be at VMworld, you have a couple more days to drop in and say hi at booth #827!

Rubrik Firefly – Now with physical, edge, and moar cloud!

Rubrik, the Palo Alto-based company that strives to simplify data protection within the enterprise, has recently announced a Series C worth a cool $61 million, bringing their total capital raised to $112 million since founding just over a couple of years ago!  And as much as I love to hear about venture capital and money and whatnot, I’m much more into the tech – as I’m sure my readers are as well!  With that, alongside the Series C announcement comes a new release of their product, dubbed Rubrik Firefly!

Rubrik Firefly – A Cloud Data Management Platform

With this third major release from Rubrik comes a bit of a rebrand, if you will – a cloud data management platform.  Nearly all organizations today have some sort of cloud play in their business, whether that be building out a private cloud to support legacy applications or consuming public cloud resources for cloud-native applications.  The problem Rubrik sees is that the data management and data protection solutions running within those businesses simply don’t scale to match what the cloud offers.  Simply put, customers need to be able to manage, secure, and protect their data no matter where it sits – onsite, offsite, or cloud – and no matter what stage of cloud adoption they are at.  Thus the Cloud Data Management Platform was born.

Rubrik Firefly Cloud Data Management

So what’s new?

Aside from a number of improvements and enhancements, Rubrik Firefly brings a few big new features to the table: physical workloads, edge environments, and spanning across clouds.  Let’s take a look at each in turn…

Physical Workloads

I had a chance to see Rubrik way back at Virtualization Field Day 5, where we got a sneak peek at their roadmap – at the time they supported vSphere only and had no immediate plans for physical workloads.  The next time they showed up, at Tech Field Day 10, they actually had a tech preview of physical MSSQL support – and today that has become a reality.  As you can see, they are moving very fast with the development of some of these features!  Rubrik Firefly adds official support for those physical SQL Servers you have in your environment – you know, the ones that take up so many resources that the DBAs just will not let you virtualize.  Rubrik can now back these up in an automated, forever-incremental fashion and give you the same ease of use, efficiency, and policy-based environment that you have with your virtual workload backups.  Firefly does this by deploying a lightweight Windows service, the Rubrik Connector Service, onto your SQL Server, allowing you to perform point-in-time restores and log processing through the same UI you’ve come to know with Rubrik.  Aside from deploying the service everything else is exactly the same – we still have the SLA policy engine, SLA domains, etc.

And they don’t stop at just SQL!  Rubrik Firefly offers the same type of support for those physical Linux workloads you have lying around.  Linux is connected into Rubrik through an RPM package, allowing for ease of deployment – from there Rubrik pulls in a list of files and directories on the machine and, again, provides the same policy-based approach as to what to back up, when to back it up, and where to store it!

Both the SQL MSI installer and the Linux RPM package are fingerprinted to the Rubrik cluster that creates them – allowing you to ensure you are only processing backups from the boxes you allow.
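Rubrik doesn’t publish the mechanics of this fingerprinting, but conceptually it resembles a keyed checksum: the cluster derives a fingerprint from a secret only it knows, and later refuses any package it can’t verify.  A minimal sketch of that idea – the function names and secrets here are mine, not Rubrik’s:

```python
import hashlib
import hmac

def fingerprint_package(package_bytes: bytes, cluster_secret: bytes) -> str:
    """Derive a fingerprint tying an installer build to one cluster (illustrative only)."""
    return hmac.new(cluster_secret, package_bytes, hashlib.sha256).hexdigest()

def cluster_accepts(package_bytes: bytes, presented_fp: str, cluster_secret: bytes) -> bool:
    """A cluster only accepts packages fingerprinted with its own secret."""
    expected = fingerprint_package(package_bytes, cluster_secret)
    return hmac.compare_digest(expected, presented_fp)

pkg = b"rubrik-connector-build"
secret_a, secret_b = b"cluster-A-secret", b"cluster-B-secret"
fp = fingerprint_package(pkg, secret_a)
assert cluster_accepts(pkg, fp, secret_a)        # the issuing cluster accepts it
assert not cluster_accepts(pkg, fp, secret_b)    # a different cluster rejects it
```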

Edge Support

Although Rubrik is shipped as a physical appliance, we all know that this is a software-based world – and that doesn’t change with Rubrik; the real value is in the way the software works!  Rubrik has taken their software and bundled it up into a virtual appliance aimed at remote/branch offices.  This allows enterprises with remote or branch offices to deploy a Rubrik instance at each location, all talking back to the mothership, if you will, at the main office.  The same policy-based approach can then be applied to workloads running at the remote locations, allowing things such as replication back to the main office and archive to cloud to be performed on the edge of the business as well as at the main office.  The virtual appliance is bundled as an OVA and sold on a per-protected-VM basis – so if you have only a handful of VMs to protect you aren’t paying through the nose to get that protection.

Cloud Spanning

Finally we come to cloud spanning.  Rubrik has always supported AWS as a target for archiving backups, giving us an easy, efficient way of getting back just the pieces of data we need from AWS – but we all know that Microsoft has been pushing Azure quite heavily as of late, handing out lots and lots of credits!  You can now take those spare credits and put them to good use, as Firefly brings support for Azure Blob storage!  The same searching and indexing technology that Rubrik has for Amazon can now be applied to Azure as well, giving customers options as to where they archive their data!

Bonus Feature – Erasure Coding

How about one more?  With the Firefly release Rubrik now utilizes erasure coding, bringing a number of performance and capacity enhancements to their customers with a simple software upgrade!  Without putting hard numbers to it, customers can expect to see a big increase in their free capacity once they perform the non-disruptive switch over to erasure coding!
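To see where that capacity gain comes from, here is a toy single-parity example.  Real erasure coding schemes – and whatever Rubrik actually uses – employ Reed–Solomon-style codes that tolerate multiple failures, but the trade-off is the same: full replicas cost 2x or 3x the raw data, while parity costs only a fraction of it.

```python
from functools import reduce

def xor_parity(chunks):
    """Compute a single XOR parity chunk over equal-sized data chunks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def rebuild(chunks, parity, lost_index):
    """Reconstruct one lost chunk from the survivors plus parity."""
    survivors = [c for i, c in enumerate(chunks) if i != lost_index]
    return xor_parity(survivors + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)
assert rebuild(data, parity, 1) == b"BBBB"   # a lost chunk is recoverable
# 3 data + 1 parity = 4 units stored for 3 units of data (~1.33x),
# versus 2x for full mirroring -- the capacity win behind the switch.
```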

Firefly seems like a great step towards the cloud data management platform – a topology-agnostic approach to wrapping policy around your data, no matter where it is, ensuring it’s protected and secured!  The release of a virtual appliance perks my ears up as well – although it’s aimed directly at ROBO deployments for now, who knows where it might go in the future – perhaps we will see a software-only release of Rubrik someday?!?  If you are interested in learning more, Rubrik has a ton of resources on their site – I encourage you to check them out for yourself.  Congratulations Rubrik on the Series C and the new release!

Nakivo – Backup and Replication for your VMs – A review!

Let’s face it – backup software is not the most exciting thing for a CIO in today’s world.  I mean, 99% of the time it sits idle, backing things up and spewing out reports – for the most part it’s somewhat of a money sinkhole in an environment – but when push comes to shove and someone has deleted that important email, or that mission-critical server fails, a piece of backup software can make or break a business!  Whether you are a small SMB or a large enterprise, backup could almost be classified as one of the most important things in your organization – so it has to be easy, intuitive, and reliable!  Nakivo, with their flagship Backup & Replication, has taken that exact approach when developing their software!  Nakivo, headquartered in the famous Silicon Valley, was founded just in 2012 and after four fast-moving years has just released version 6.1 of their product.  This is one piece of software I have been hearing a lot about but never had the chance to check out.  With that said, I grabbed an NFR key from them and put it in the lab – and here are my thoughts.

Disclaimer: This review is sponsored, meaning I did receive compensation in some form for writing it!  That said, as always, any review I post on my site is solely my words and my opinion, and was in no way modified or changed by the vendor!



Before we dive directly into the installation it’s best to first explain a little about Nakivo’s architecture.  Nakivo is really broken down into three main components: a Director, a Transporter, and a Backup Repository.


The Director

We can think of the Director as somewhat of a management plane for Nakivo – providing the user interface we log into and maintaining an inventory of our virtual infrastructure.  It also handles the creation, configuration, and scheduling of our backup jobs.  We only need one instance of the Director, as it can handle multiple vCenters and standalone ESXi hosts.

The Transporter

The next component, the Transporter, is our heavy lifter.  The Transporter is the data mover, per se, performing all of the backup, replication, and recovery operations as it receives its instructions from the Director.  The Transporter also handles features such as compression, encryption, and deduplication.  When we install a Director we automatically get one “Onboard Transporter” installed on the same machine by default, which cannot be removed.  That said, as we find ourselves processing more VMs simultaneously, we can scale our backup environment by adding additional standalone Transporters to help with the lifting!  As we do so, we also get network acceleration and encryption between Transporters as data is passed back and forth.  Finally, we have the Backup Repository.
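To illustrate what that deduplication is doing conceptually – this is a generic hash-based sketch, not Nakivo’s actual implementation – data is split into blocks, each block is fingerprinted, and a block whose fingerprint is already in the repository is never stored twice:

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks are stored once."""

    def __init__(self):
        self.blocks = {}  # sha256 digest -> block bytes

    def write(self, data: bytes, block_size: int = 4):
        """Split data into blocks, store only unique ones, return a rebuild recipe."""
        fingerprints = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicates are skipped
            fingerprints.append(digest)
        return fingerprints

    def read(self, fingerprints):
        """Rebuild the original stream from the recipe."""
        return b"".join(self.blocks[d] for d in fingerprints)

store = DedupStore()
recipe = store.write(b"AAAABBBBAAAABBBB")      # two repeated 4-byte patterns
assert store.read(recipe) == b"AAAABBBBAAAABBBB"
assert len(store.blocks) == 2                  # only 2 unique blocks kept, not 4
```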

The Backup Repository

This one is pretty self-explanatory in the backup world.  It’s a container, or pool of storage, to hold our backups.  This can be a CIFS share or simply any local folder or storage attached to a Transporter.  Again, when we initially install our Director we also get an “Onboard Backup Repository” to use by default.


Alright, with a little background knowledge behind us it’s time to get Nakivo deployed – and wow, talk about options!  Deploying Nakivo Backup & Replication should satisfy just about every environment out there!  If you are primarily a Windows shop, simply use the Windows installer.  Does your environment mainly consist of Linux-based distributions?  Hey, simply install the Linux package!  Or do you prefer the ease of simply deploying appliances?  They have you covered there as well with OVA-based virtual appliances!  Keep in mind that it doesn’t matter which installation method you choose – in the end you are left with the same product.  For the sake of this review I’ve chosen what I think might be the most common installation method – the Windows-based install.

So on with the install!  I’ve chosen the “Full solution” option as my installation type – meaning I will get an all-in-one install of a Director, Transporter, and Backup Repository on the same machine!  Certainly this might not be ideal for a production environment, but it suffices in the case of my lab.  As you can also see, the first screen allows me to specify exactly where I’d like to create the repository as well.

One click later…


Wait, what!?  Yeah – one click!  One click and we are done with the Windows installation of Nakivo Backup & Replication!  The other installation types are just as easy – Linux requires the execution of a single command, and we all know how simple deploying a virtual appliance is!  If you are looking to protect an Amazon instance, a simple link to a deployable AMI is provided as well!


Time to start configuring the product now!  Just a note – I really dig the earth/space image that is displayed by default in the UI.  It’s a nice break from the standard box-type login screens you see in most products.


Upon first launching Nakivo you will be prompted to set up a username and password.  After doing so you will be brought into the configuration wizard, and as you can see below it only requires three types of information: Inventory, Transporters, and Repositories.  This wizard, along with many others within Nakivo, is short and to the point – and clearly makes sense in the simplest terms – think: what to back up, how to move it, and where to put it – easy, right?


As far as Inventory and VMware go, we just need to point Nakivo to our vCenter Server and provide it with proper credentials – from there the product goes out, discovers our inventory, and allows us to add it into Nakivo Backup & Replication.


The Transporter section allows us to add/import any existing Transporters we may have already installed in our environment – be they on vSphere or Amazon AWS.  As mentioned earlier, this review will simply use the “Onboard transporter” that is installed by default.


Lastly we can set up any Backup Repositories we want within our backup environment – again, I’m sticking with the default “Onboard repository” we set up during the installation, but if need be we can create new or import existing repositories into Nakivo during this step.

Once we are done we are brought into the Nakivo management UI, where we can begin creating jobs and backing up our environment – but before we go too far there are some other configurable options we can change that weren’t included in the initial bare-bones wizard.


I’m not going to go through all of the configurable options, but I’ll highlight a few common settings normally configured within environments, as well as some very nice-to-haves that Nakivo includes…

  • General->Email Settings – here we set up our SMTP options in order to have Nakivo send out alerts and reports.
  • General->Branding Settings – we have complete control over the look and feel of Nakivo, uploading our own logos and backgrounds as well as support and contact information.
  • General->System Settings – this allows us to specify how long we store job history and system events, as well as set up any regional options we prefer, such as week start days, etc.
  • Inventory – here we can add multiple vCenter/ESXi hosts as well as AWS environments.
  • Transporters/Repositories – again, this is where we can manage or add any new Transporters or repositories to the system.
  • Licensing – handles the changing of licenses for the product.


So on to the job setup

Now that we have Nakivo configured, it’s time to start creating some jobs and see just how the product performs.  From the main dashboard we can do this by simply clicking the ‘Create’ button.  As you can see to the left, we have a variety of different jobs we can create, and depending on what you have set up within your inventory some may be unavailable.  For instance, I don’t have an Amazon account attached to my instance of Nakivo, so I’m unable to create a job to back up or replicate EC2 VMs.  That said, we did add our vCenter into our inventory, so let’s go ahead and select ‘VMware vSphere backup job’ to get started…


As you can see above, the vSphere backup job creation is again in a wizard-type format, first requiring us to select which VMs we would like to process with this job.  We do this by either browsing through the inventory presented, or filtering with the search box provided, then checking the box next to the VMs we’d like to back up.  We can also select parent objects here, such as a host, cluster, or vCenter, which would in turn back up all VMs residing within the parent.  This is useful in the event you want to capture any newly created VMs in the environment without having to modify existing jobs every time.  If selecting multiple VMs during this stage, you can drag them around within the right-hand pane in order to set priority for processing – ensuring certain VMs are backed up before others.  For now I’ve selected just my Scoreboard VM.


The second step deals with repository selection – we’ve already selected what we want to back up, now it’s time to say where to back it up to.  Selecting ‘Advanced’ and expanding our VMs, we can see that we can globally select a repository for the job, yet perform overrides on a per-VM, per-disk basis – giving us the granularity to place certain VM disks on certain repositories if we choose to do so.


Thirdly, we set up the job schedule, with shortcuts for all days, workdays, weekends, etc., which can change depending on the regional settings we have configured within the system.


Lastly, we set up our job options.  It is here where we give the job a name, select our retention cycles, and specify any pre/post-job scripts we might want to kick off – all of the standard features you expect from a backup solution – but there are some additional options available here that we should have a look at…

  • App-aware mode – instructs VMware Tools to quiesce the VM before backing up, allowing applications to ensure they are in a consistent state.
  • Change Tracking – this is a common feature provided by VMware that allows backup applications to process just those blocks that have changed since previous backups, speeding up the time it takes to create an incremental backup.  Here we can select either the VMware version (preferred) or Nakivo’s proprietary version (available if no other CBT exists).
  • Network Acceleration – if backing up over a WAN or slow LAN links, this option will leverage compression and other reduction techniques to speed up data transfer.
  • Encryption – this option will encrypt data that flows between Transporters.  Since we have only one Transporter, this option is not available to us.
  • Screenshot Verification – this option uses a Nakivo technology called Flash VM Boot (covered later) to automatically recover our backups in an isolated manner and take a screenshot of the VM for inclusion in the job reports and notifications.
  • Recovery Points – here we can specify how many daily, weekly, monthly, and yearly recovery points we would like to maintain.
  • Data Transfer – allows us to specify how Nakivo gets to the source data (Hot Add – mounts VM disks to the Transporters; SAN – retrieves data directly from an FC or iSCSI SAN LUN; or LAN – network access to the data).  We can also specify which Transporters we would like to use for the job if we had multiple Transporters on different networks, clusters, etc.
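The Change Tracking option above is worth a quick illustration.  However the changed-block list is obtained – VMware’s CBT or Nakivo’s own tracking – the incremental mechanism reduces to copying only the blocks that differ since the last backup.  A rough sketch of that idea:

```python
def changed_blocks(prev, curr):
    """Indices of blocks that differ since the last backup (what CBT reports)."""
    return [i for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def incremental_backup(prev, curr):
    """Copy only the changed blocks, as a CBT-driven incremental would."""
    return {i: curr[i] for i in changed_blocks(prev, curr)}

def restore(full, increments):
    """Rebuild the current disk by replaying increments over the full backup."""
    disk = list(full)
    for inc in increments:               # oldest to newest
        for i, block in inc.items():
            disk[i] = block
    return disk

full = ["a0", "b0", "c0", "d0"]          # blocks at full-backup time
now  = ["a0", "b1", "c0", "d1"]          # current disk state
inc = incremental_backup(full, now)
assert inc == {1: "b1", 3: "d1"}         # only 2 of 4 blocks transferred
assert restore(full, [inc]) == now       # yet the restore is complete
```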



After clicking ‘Finish’ we can see that the ‘Run Job’ tab in the dashboard is active and displays our newly created job.  As we can see above, our new job is indeed running, with the status being updated in the Job Info section of the dashboard.  I really like the way Nakivo displays this data – we can see everything we need to know about any given job: its run status, resource usage on any Transporters it’s utilizing, and the events and job status, all on one dashboard.  When the initialization of the job is complete, the UI switches to a different view showing the speed and data transferred – a very intuitive UI design.  The only thing I’d love to see here is the ability to break this information out into another window without having to open a new tab.

But it’s Nakivo Backup AND REPLICATION

Now that we have successfully backed up our Scoreboard VM it’s time to have a look at replication.  The process for creating a replication job is similar to that of a backup – simply click ‘Create’ and select ‘VMware vSphere Replication Job’.  Again, we are presented with a similar four-step wizard.  In Step 1 we select which VMs we wish to replicate, again with the option of selecting parent containers.


Step 2, as shown above, presents us with some different options than those of a backup.  Since we are replicating VMs, they will be stored in their native VMware format; therefore, instead of selecting a repository as a target we need to select another ESXi host.  As you can see above, I’ve chosen to replicate my ConcessionStandPOS VM from its location in Montreal to another ESXi host located in Brossard for DR purposes.  Again, Step 3 allows us to create a schedule for the replication with the exact same options as a backup job.


Step 4, shown above, is similar to the backup job options with a few additions.  We still have the ability to select our Transporters and transport mode, set recovery point retention settings, and perform screenshot verification; however, we have a few new options to configure, outlined below.

  • Replica Names – append/prepend a string to the VM name for the replica, or specify individual names on a per-VM basis.
  • Replica Disks – maintain consistency with the source disk type for the replica, or specify that replicas are always stored thin-provisioned.

Once we click ‘Finish’ we will again see our newly created job on the dashboard.  One item of interest here is that by default Nakivo doesn’t group our jobs, meaning backup and replication jobs are intermixed.  They are distinguishable by the small icon next to them, but if we want to further distinguish between the two visually, we can click ‘Create’ and then ‘Job Group’.  This essentially creates a folder that we can drag and drop our jobs in and out of, allowing us to create a Backup Job Group and a Replication Job Group.  Job Groups also allow us to perform bulk operations on all jobs within the group, such as starting, stopping, disabling, and enabling…

When it really matters…

We can do all of the backing up and replicating we want, but when push comes to shove we all know that it’s the recovery that matters most!  All recovery within Nakivo is done through the ‘Recover’ menu on the main dashboard.  As you can see to the left, we have a variety of options when it comes to recovery in Nakivo, each explained below…

Individual Files

This allows us to recover individual files from a VM backup within Nakivo.  After selecting our backup and then a desired restore point, Nakivo mounts the deduplicated, compressed backup file to its Director.  We are then presented with a file browse dialog, allowing us to select individual files, folders, partitions, and drives.  From there we have the option of downloading these files directly to our Nakivo server or to whatever client we happen to be running the Nakivo UI on, or forwarding them via email.

Microsoft Active Directory Objects

Active Directory objects are treated somewhat the same as a file-level recovery.  The backups are mounted in their compressed and deduplicated state to the Nakivo server.  From there you can browse or search for individual objects and recover them directly to your client machine.  The AD objects are downloaded in LDIF format, which allows for easy importing directly back into Active Directory.
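As a quick illustration of why LDIF makes re-import easy: an exported object is just a plain-text entry like the one built below (the attribute values and helper function here are my own, not Nakivo’s output), and a file of such entries can be pushed back into AD with a standard tool such as `ldifde -i -f restore.ldf`.

```python
def user_to_ldif(cn: str, ou: str, dc_parts: list, sam: str) -> str:
    """Render one AD user as an LDIF add entry (attributes illustrative only)."""
    dn = f"cn={cn},ou={ou}," + ",".join(f"dc={d}" for d in dc_parts)
    return "\n".join([
        f"dn: {dn}",              # distinguished name locates the object
        "changetype: add",        # tells the importer to create it
        "objectClass: user",
        f"cn: {cn}",
        f"sAMAccountName: {sam}",
    ])

entry = user_to_ldif("Jane Doe", "Staff", ["corp", "local"], "jdoe")
assert entry.splitlines()[0] == "dn: cn=Jane Doe,ou=Staff,dc=corp,dc=local"
assert "sAMAccountName: jdoe" in entry
```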

Microsoft Exchange Objects

Similar to Active Directory objects, Nakivo can restore Microsoft Exchange items as well.  With this, we have the ability to search for and recover items such as emails, folders, and mailboxes.  The items are downloaded to the client machine or, alternatively, forwarded via email to an address of your choosing.

VMs from backup

If you need to restore an entire VM, this is the option you would most likely choose.  Nakivo allows you to restore a complete VM from a backup file – at which point it extracts the data from the deduplicated, compressed backup file and registers a VM on a host of your choosing, either preserving the VM’s UUID or creating a new one.  Just as with replication, we are able to restore the VM with its original disk type or force it to be thin-provisioned.  We can also specify whether we would like our recovered VMs powered on, and whether or not we would like to change or preserve the MAC address on the recovered VM.

VMs from replica

Failing over to a replica within Nakivo is a very easy process.  Essentially, you select which VM you would like to fail over, select the point in time you want to fail over to, and run the job – after that, Nakivo simply reverts the replica to the correct point-in-time snapshot and powers it on.  When completed you are left with an exact copy of your VM, recovered almost immediately.

Flash VM Boot

Flash VM Boot is a technology that allows us to power on our VM backups directly from their compressed and deduplicated state.  Rather than taking the time to restore the data as we did in the ‘VMs from backup’ scenario, we can simply boot a VM directly from its backup files.  Nakivo does this by first creating a new VM on a target ESXi host, then exposing the VM’s disks within the backup as iSCSI targets and mounting them directly to the newly created VM as virtual RDMs.  Before any mounting, though, a snapshot is created, which redirects any changes that may take place during the Flash VM Boot, providing a means of discarding them later in order to preserve the integrity of the backups.  This is the technology that enables the ‘Screenshot verification’ option within the backup jobs, allowing us to ensure that our backups will indeed boot up when it really matters.  Once the VMs have booted, you can permanently recover them by utilizing VMware Storage vMotion to migrate the RDMs to VMDKs, or, if you aren’t licensed for Storage vMotion, you can create a new replication job within Nakivo to replicate the VM to another host.
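The snapshot redirect described above is essentially copy-on-write: the booted VM’s writes land in an overlay while reads fall through to the untouched backup, and discarding the snapshot is just dropping the overlay.  A toy model of that behavior (my own simplification, not Nakivo’s code):

```python
class CopyOnWriteDisk:
    """Writes go to a snapshot overlay; reads fall through to the
    immutable backup image -- the redirect behind Flash VM Boot."""

    def __init__(self, backup_blocks):
        self.backup = backup_blocks   # read-only backup image
        self.overlay = {}             # blocks changed while booted

    def read(self, i):
        return self.overlay.get(i, self.backup[i])

    def write(self, i, data):
        self.overlay[i] = data        # the backup itself is never touched

    def discard_snapshot(self):
        self.overlay.clear()          # all boot-time changes vanish

backup = ["os", "app", "data"]
disk = CopyOnWriteDisk(backup)
disk.write(2, "data-modified")
assert disk.read(2) == "data-modified"   # the running VM sees its change
assert backup[2] == "data"               # backup integrity is preserved
disk.discard_snapshot()
assert disk.read(2) == "data"            # back to the pristine backup
```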

So whats the verdict?

Nakivo is certainly a very easy product to use and get used to – having the management interface run through a web browser is certainly an advantage, letting you launch it from any workstation without installing a client!  The UI is very intuitive and very clean, which is surprising because they cram a lot of information into those screens – yet everything is super easy to find.  Creating backup and replication jobs is a breeze – simple four-step wizards from start to finish!  As for performance I can’t complain either; all of my jobs finished in a timely manner.  Mind you, my test VMs are quite small with very little change rate, but needless to say performance was fine.  Nakivo is architected in a way that is simple to get up and running very quickly, yet also simple to scale with a growing environment by adding more Transporters and repositories.  I really like the options you have when deploying Nakivo – physical, virtual, or cloud; Windows, Linux, virtual appliance, or even on a NAS such as a Synology – Nakivo leaves the choice to you!  The deduplication technology is outstanding, and coupled with the compression they offer you can be sure that you are using as little capacity as possible and not storing redundant data or wasting space.  I would, however, like to see the product expanded in the future to include a couple of features that I couldn’t find.  Firstly, it would be nice to see Nakivo bake in the ability to restore individual files and application items directly back into their source VMs without having to download them locally.  As well, even though I don’t use it, Hyper-V support seems to always come last on backup vendors’ lists – hopefully we see this supported sometime soon too.
I should mention that even though this review focused solely on VMware, Nakivo is fully supported to protect your instances in Amazon as well – giving you feature-rich backup and replication options to move data between regions without utilizing snapshots.  Also, there are a slew of multi-tenancy options that I didn’t have time to explore, as well as the ability to perform copies of your backups offsite or to the cloud.  As far as licensing goes, Nakivo is licensed on a per-socket basis, and honestly, starting at $199/socket for VMware and $49/month for AWS, you are going to be hard-pressed to find a product with all of these features at a lower price point!

With all this said, would I recommend Nakivo?  Certainly!   It’s easy, intuitive, it performs and it’s priced right!  But as always, don’t necessarily take my word for it!  If you want to try out Nakivo for yourself you can – if you are a VMUG member, vExpert, VCP, VSP, VTSP, or VCI you can get your hands on a free, full-featured two-socket NFR key yourself!  Nakivo also offers a full-featured trial edition for 14 days to try the product out!  Still not enough for you?  Nakivo has a free edition – you can back up 2 VMs, with all of the features above, for free, forever!  Again – options!!  And no excuse not to try it out!

Want to learn more about Nakivo?

Check out some of these great resources!

As well as some other great community reviews of Nakivo


Free NFR from Altaro


Over the last number of years we have seen a lot of virtualization companies show their appreciation for industry influencers by offering up free NFR licenses for their products. Now we can add one more to that list – Altaro Software. Altaro has been a supporter of my blog for a little while now, and as with all my sponsors I try to help spread the word about any initiatives or giveaways they have that I find particularly interesting and think my readers may benefit from.

So, without further ado, if you are a VMware vExpert or a Microsoft MVP, Altaro has a gift for you! In appreciation for the work you do within the virtualization industry, Altaro is offering up an NFR key of their flagship software, Altaro VM Backup, at no cost. Just follow this link to fill in the form to grab yours. Keep in mind, this is a full-featured NFR license of their unlimited edition – meaning you can back up as many VMs as often as you like. I went through the installation and configuration process of Altaro in my introductory post a ways back and found it very intuitive with a nice UI – certainly a product worth checking out! For some more in-depth writing around Altaro, may I suggest Vladan Seget’s series on the product.

With that, Thank you Altaro for your support in the community and happy backuping everyone!

Using PowerShell to mass configure the new Veeam v9 features

Veeam v9 is here and if you have already performed the upgrade you might be a bit anxious to start using some of the new features that came along with it.  In my case I’ve already gone ahead and done my due diligence by enabling and configuring some of the new features on a few test backup/replication jobs, and I’m ready to duplicate this to the rest of the environment – problem being, I have A LOT of jobs to apply these to.  As always, I look to automation to solve this issue for me.  One, it is way faster, and two, it provides a consistent set of configuration (or errors) across my jobs – making it far easier to troubleshoot and change if need be.  Thankfully Veeam provides a set of PowerShell cmdlets that allows me to automate the configuration of some of these features.  So, if you are ready to go, let’s have a look at a few of the new features within Veeam v9 and their corresponding PowerShell cmdlets.

Just a note – for each of these examples I’ve posted the code to handle a single object, but you could easily surround the blocks of code with a foreach() if you are looking to apply the configurations to many objects.  That’s actually what I have done in my environment – it’s just much easier to read if the examples here deal with individual objects.
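As a rough sketch of what that foreach() wrapper could look like (assuming the Veeam PowerShell snap-in is loaded on your backup server, and using the per-VM repository setting covered later in this post as the loop body):

```powershell
# Sketch only - requires a Veeam B&R server with the Veeam snap-in available.
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

# Loop over every repository instead of fetching one by name,
# applying the same per-VM backup file chain setting to each.
foreach ($repo in Get-VBRBackupRepository) {
    $repo.Options.OneBackupFilePerVm = $true
    $repo.SaveOptions()   # persist the change back to the repository
}
```

The same pattern works for jobs – swap `Get-VBRBackupRepository` for `Get-VBRJob` and apply whichever of the settings below you need inside the loop.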

Enabling Per-VM file chains

First up is the Per-VM Backup File Chain introduced in v9.  In previous versions of Veeam, all of the VMs contained within a single job were also contained within a single backup file – in the end we were left with some massive backup files sitting on our repositories.  Having a massive file laying around isn’t such a big deal, but when the time came to manage or move that file in any way it presented a few problems – it took a long time to move, and activity surrounding that file had to be disabled until we were done.   In the end we were left with a lot of waiting and no backups.  The v9 Per-VM Backup File Chain fixes this – it allows us to store our backup files on a per-VM basis, leaving them much easier to manage, not to mention the headaches that are saved if corruption of our backup files occurs.  Either way, I wanted to enable this on a dozen or so of my repositories…

I say repository since that is where the per-VM backup chain is enabled – not on the job, not on the VM, but on the actual Veeam repository.  The process of doing so is pretty simple: get our repository, set a flag to true, and call the SaveOptions() function – as follows…

$repo = Get-VBRBackupRepository -Name "Name of repository"
$repo.Options.OneBackupFilePerVm = $true
$repo.SaveOptions()

New Mount Server

In versions of Veeam before v9, certain restore operations required mounting backups to the Veeam backup server, which when dealing with remote sites could have resulted in increased bandwidth usage depending on how you had configured your environment.  v9 gives us the ability to designate any Windows machine as a mount server.  The mount server can then be used as a mount point to perform file-level recovery operations, allowing the bandwidth to stay local to the remote site.

As with the per-VM backup chains, mount servers are enabled at the repository level.  In my case I wanted my repositories and mount servers to be one and the same – in order to do that I simply get the remote repository, then call Set-VBRBackupRepository, passing it my mount host name and turning on the vPower NFS flag as shown below…

$repo = Get-VBRBackupRepository -Name "Name of repository"
$repo | Set-VBRBackupRepository -MountHost (Get-VBRServer "Name of desired Mount Host") -EnableVPowerNFS

Guest Interaction Proxy

Another new ROBO-enhancing feature in v9 is the ability to specify a guest interaction proxy.  Previously the Veeam Backup and Replication server handled deploying runtime processes into the VMs to facilitate different parts of the backup and replication jobs – in v9, we can now designate servers that may be onsite to do this.  This helps in a couple of ways – first, it reduces traffic traversing our WAN, and second, backup servers were sometimes isolated from the VMs they were backing up, preventing certain actions from even being able to take place.  Anyways, the guest interaction proxy is a per-job setting and is set up within the VSS settings of the job.  In my case I just needed to flip AutoDetect to $true in order to get Veeam to select the proper GIP, then save the VSS options back to the job.

$job = Get-VBRJob -Name "Job Name"
$vssoptions = $job.GetVssOptions()
$vssoptions.GuestProxyAutoDetect = $True
$job.SetVssOptions($vssoptions)

Enable deleted file blocks

Veeam v9 has introduced many data reduction technologies in order to help us save space and more efficiently manage all of our backup capacity.  The first technique we will look at is the ability to skip backing up deleted file blocks.  This can be enabled on your existing backup jobs by setting the DirtyBlocksNullingEnabled flag, then saving the job options, as follows.

$job = Get-VBRJob -Name "Job Name"
$joboptions = $job.GetOptions()
$joboptions.ViSourceOptions.DirtyBlocksNullingEnabled = $True
$job.SetOptions($joboptions)

Excluding certain folders/files

Another space-saving feature inside of v9 is the ability to exclude or include certain files or folders contained within the VMs – think about temp directories – under normal circumstances we don’t need them, so why take up all that capacity backing them up?  We can set this up by first setting the BackupScope property – this can be set to exclude folders (ExcludeSpecifiedFolders), only include folders (IncludeSpecifiedFolders) or simply back up everything (Everything).  Depending on the setting of BackupScope we then set GuestFSExcludeOptions or GuestFSIncludeOptions with an array of strings pointing to the desired folders – finally, saving our job options as follows…

$job = Get-VBRJob -Name "Job Name"
$jobobject = Get-VBRJobObject -Job $job -Name "VM Name"
$vssoptions = Get-VBRJobObjectVssOptions -ObjectInJob $jobobject
$vssoptions.GuestFSExcludeOptions.BackupScope = "ExcludeSpecifiedFolders"
$vssoptions.GuestFSExcludeOptions.ExcludeList = "C:\folder","D:\folder","c:\test\folder"
Set-VBRJobObjectVssOptions -Object $jobobject -Options $vssoptions

Storage-Level Corruption Guard on Production Backup Jobs (not just backup copy)

SureBackup does a great job at ensuring our VMs will boot; however, there may be certain portions of our data that can become corrupt yet still pass a SureBackup test.  To help alleviate this, Veeam has introduced something called Storage-Level Corruption Guard (SLCG) to periodically identify and fix certain storage issues.  SLCG has actually been around in previous versions, but only available for Backup Copy jobs.  In v9 it can now be enabled on our production backup jobs, giving us extra peace of mind when the need to restore comes along.   This is enabled by first setting the EnableRechek (yes, it’s spelled like that) flag, then setting a schedule (Daily/Monthly) and a few other options, and finally saving our job options…  Below we’ve set a job up to perform SLCG on Fridays.

$job = Get-VBRJob -Name "Job Name"
$joboptions = $job.GetOptions()
$joboptions.GenerationPolicy.EnableRechek = $True
$joboptions.GenerationPolicy.RecheckScheduleKind = "Daily"
$joboptions.GenerationPolicy.RecheckDays = "Friday"
$job.SetOptions($joboptions)

Defragment and compact full backup file – on production backups not just backup copy

Over time our full backup files can become bloated and heavily fragmented – when we delete a VM, for example, the full backup might still be holding onto certain data that was in that VM.  Normally we could take an active full backup in order to help purge this data, but as we all know that requires us to affect production and use up valuable resources.  To help alleviate this, v9 has introduced the ability to defragment and compact our full backups on a schedule.  This is done very similarly to SLCG – getting the options of a job and setting the schedule.  Below we enable our defrag to run on Fridays.

$job = Get-VBRJob -Name "Job Name"
$joboptions = $job.GetOptions()
$joboptions.GenerationPolicy.EnableCompactFull = $True
$joboptions.GenerationPolicy.CompactFullBackupScheduleKind = "Daily"
$joboptions.GenerationPolicy.CompactFullBackupDays = "Friday"
$job.SetOptions($joboptions)

So there you have it – a little bit of automation for those that may have to update numerous jobs to fully take advantage of some of the features Veeam v9 has introduced.  As always, please feel free to reach out if any of this isn’t working, or if you have any comments, questions, concerns, rants, etc.  Thanks for reading!

Nice to meet you Altaro VM Backup


I’m happy to announce that Altaro is now a sponsor of this site, and as I normally do with new sponsors I like to give a little introductory post containing background on the company and the products they provide.  Altaro was founded in 2009 and has been making a stand in the VM backup space ever since.  Altaro’s early focus was within the Microsoft space, providing backup and restore operations on the Hyper-V platform with their product Altaro Hyper-V Backup.  Everything changed in September of 2015 when the product was renamed to Altaro VM Backup and support for VMware vSphere was added to the solution.

I had a chance to check out early builds of the VMware vSphere support within Altaro VM Backup as a member of their beta.  Timing and busyness prevented me from ever blogging about the product, but going back through my rough notes and searching deep within my (limited) memory I can say there are a few things that really stood out for me…


Installation of Altaro VM Backup was a breeze!  I utilized the free 1000 CPU hours that we get as vExperts from Ravello Systems to set this up – basically all that is needed is a few clicks and a Windows machine to install the software.  It’s your basic Next, Next, Done type of wizard-driven install.    It should be noted that once everything is fully set up and configured, the Altaro Management Console can be installed on a remote machine as well, connecting to your main server over the network – meaning there is no need to RDP into the Altaro VM Backup console all the time; a simple connection from your laptop would suffice.


As far as configuration goes, Altaro VM Backup can’t get much easier than this!   Altaro’s configuration boils down to the two basic questions we ask ourselves when considering backup: What are we going to back up?  Where are we going to put it?  The first question is answered by simply pointing Altaro at your vCenter server (or individual ESXi/Hyper-V hosts) and providing credentials – from there Altaro will connect to the vSphere APIs and bring back an inventory of your environment.  The second question – where to put it? – is just a matter of selecting your backup storage.  This can be either a network location (via UNC path) or a physical drive attached to the Altaro Management Console.  Additionally, Altaro VM Backup provides customers with a means to ship copies of your backups offsite as well.  This can be done either by rotating external USB drives, network paths (UNC), or to another instance of an Altaro server running at the secondary location.

Backup and whatnot…

Once you have some source VMs and target storage set up, Altaro acts as you would expect, allowing you to set up scheduled backup jobs to run every hour/night/week, etc. – or take one-off backups as well.  One nice feature was the ability to simply drag and drop a VM onto your storage and have it create the job for you automagically!    There are a few other bullet points below that really helped sell Altaro to me…

  • VSS support – meaning we can fully quiesce virtual machines to ensure consistent backups
  • Item-level restore support – meaning we can restore individual emails from Exchange, individual files from VMs, etc.
  • Full support for Microsoft Cluster Shared Volumes
  • Compression and encryption
  • Ability to back up VMs to multiple locations
  • Individualized retention policies applied on a per-VM basis
  • Sandbox restores – allowing you to test for backup integrity and restorability of your backup files

With all of these features packed into their first release supporting vSphere, I can only hope to see more from Altaro!  Let me reiterate though – the main selling point of the software for me was not a certain feature or support for any platform – it’s the UI!  A clean, crisp, easy-to-use user interface should be of high importance for any product that hits the market – a poorly designed one can make or break a customer’s reaction to your product!  Altaro has done a great job with theirs – the drag and drop functionality is awesome, and everything is easy to find – a very intuitive design!   See for yourself below!  Not to mention that I went from install to backup in less than 10 minutes, without the need to use any documentation!


So with all that, welcome Altaro to the mwpreston.net family – you should expect to see me go into this software a bit deeper in the future.  In the meantime, if you want to try out Altaro for yourself you can do so for free – you can either go the 30-day trial route or simply use the product for free for 2 VMs FOREVER!   Needless to say, if you are in the market for some backup, don’t forget about Altaro!