
Setting yourself up for success with Veeam Pre-Job Scripts

For a while Veeam has been able to execute scripts post-job – that is, after the job completes – but it wasn't until version 8 of their flagship Backup and Replication product that they added the ability to run a pre-job script, one that executes before the job starts.  When v8 first came out I struggled to figure out what in the world I would need a pre-job script for – and for the longest time I never used one in any of my environments.  If a job failed I would execute post-job scripts to hopefully correct the reason for the failure – but a while back it dawned on me, and with a bit of a change in mindset I realized something – why fail first?


Why fail when success is possible?

As I mentioned above, I'd grown accustomed to using post-job scripts to correct failing jobs.  For instance, there were times when, for whatever reason, a proxy would hold on to a disk belonging to one of my replicas – subsequently, the next run of the job would fail trying to access that disk – and, more importantly, consolidation of any VMs requiring it would fail as the original replica couldn't access the disk mounted to the proxy.  What did I do to fix this?  Well, I added a script that executed post-job, simply unmounting any disks from my Veeam proxies that shouldn't be mounted.

Another scenario – I had some issues a while back with NFS datastores simply becoming inaccessible.  The fix – remove and re-add them to the ESXi host.  The solution at the time was a post-job script in Veeam: if the job failed with the error of not being able to find the datastore, I ran a script that would automatically remove and re-add the datastore for me – and the next job run everything would be great!
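For reference, that remove/re-add fix can be sketched in PowerCLI along the lines below.  This is just a sketch of the idea, not the exact script I ran at the time – the datastore name, NFS server, export path, and cluster name are all placeholders you would swap for your own environment, and it assumes you have already connected with Connect-VIServer:

```powershell
# Assumes an existing PowerCLI session: Connect-VIServer vcenter.mwpreston.local
# Placeholder values - substitute your own datastore, NFS server, export path, and cluster
$dsName  = "NFSDatastore01"
$nfsHost = "nas.mwpreston.local"
$nfsPath = "/volume1/datastore01"

foreach ($vmhost in Get-VMHost -Location ESXCluster) {
    # Look for the datastore on this host; it may be missing or inaccessible
    $ds = Get-Datastore -Name $dsName -VMHost $vmhost -ErrorAction SilentlyContinue
    if (-not $ds -or $ds.State -ne "Available") {
        # Remove the stale mount if it still exists, then mount the export fresh
        if ($ds) { Remove-Datastore -Datastore $ds -VMHost $vmhost -Confirm:$false }
        New-Datastore -Nfs -VMHost $vmhost -Name $dsName -NfsHost $nfsHost -Path $nfsPath | Out-Null
    }
}
```

Run per-host like this, the script only touches hosts where the datastore is actually missing or unavailable, so it is safe to run on every job execution.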

“Fail and Fix” or “Fix and Pass”

So, the two solutions above, while they do fix the issues, do so after the fact – after we have already failed.  Even though everything was fixed up for the next run of the job, I'd still lose that one restore point – and sure enough, the time WILL come when it's that exact point in time you need to recover from!  The answer to all this is pretty simple – migrate your post-job scripts to pre-job scripts.  Let's set ourselves up for success before we even start our job!  Although this may seem like common sense, for whatever reason it took a while before I saw it that way.

So with all that – hey, let's add some code to this post.  Below you will find one of my scripts that runs before each Veeam job – my proactive approach to removing foreign replica disks from my Veeam proxies!

Add-PSSnapin VeeamPSSnapIn
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer vcenter.mwpreston.local -User username -Password password
# get job id out of parent process id
$parentpid = (Get-WmiObject Win32_Process -Filter "processid='$pid'").ParentProcessId.ToString()
$parentcmd = (Get-WmiObject Win32_Process -Filter "processid='$parentpid'").CommandLine
$jobid = $parentcmd.split('" "')[16]
$vbrjob = Get-VBRJob | Where-Object { $_.Id -eq "$jobid" }
# get some info to build replica VM names
$suffix = $vbrjob.Options.ViReplicaTargetOptions.ReplicaNameSuffix
$vms = $vbrjob.GetObjectsInJob()
# create array of replica names
$replicasinjob = @()
foreach ($vm in $vms) {
    $replica = $vm.Name + $suffix
    $replicasinjob += $replica
}
# loop through each replica and check Veeam proxies for foreign disks
foreach ($replicaitem in $replicasinjob) {
    $replica = $replicaitem.ToString()
    Get-VM -Location ESXCluster -Name VBR* | Get-HardDisk |
        Where-Object { $_.FileName -like "*$replica*" } |
        Remove-HardDisk -Confirm:$false
}

So as you can see this is a simple script that basically retrieves the ID of the job it was called from by walking up to the parent Veeam process and parsing its command line – by doing it this way we can reuse this block of code in any of our jobs.  It then simply searches through all of the disks attached to our Veeam proxies – if it finds one that belongs to one of the replicas we are about to process, it removes it.  Simple as that!  Now, rather than failing our job because a certain file has been locked, we have set ourselves up for a successful job run – without having to do a thing!  Which is the way I normally like it 🙂  Thanks for reading!

Vembu BDR in the lab – Part 2 – Recovery!

In part 1 of this review we backed up our data and ensured that it is restorable.  Now let's continue our review of the Vembu BDR suite by having some fun and actually restoring some data!  That said, before we jump right into performing a restore I want to talk a little bit about a nifty technology that BDR uses for most of its restores – the Vembu Virtual Drive.

What is the Vembu Virtual Drive?

The Virtual Drive is a technology that is set up when we initially install the BDR suite and is essentially a second hard drive on our BDR server.  This drive is used by Vembu to hold metadata – pointers, if you will, to mounted backups on our BDR server – therefore it requires very little extra space, only 512MB to be exact, as it just points to already created backup files.  As we can see below, when a backup is mounted Vembu actually makes it available in many different formats.  For instance, the backup shown below is of a physical Windows server, yet Vembu presents us with this data in both VMware and Hyper-V formats (vmdk, vhd) as well as IMG formats (ISO).  Although displaying the backups in these various formats is in itself not magic, the fact that BDR does this within seconds is truly incredible – and not just for full backups either.  Incremental backups can be displayed and restored to the various formats in seconds as well!  The secret sauce to all of this lies in the Vembu Hive file system – which provides the means for BDR to efficiently expose its incremental backups as virtual full backups without having to migrate or merge any data at all – meaning you can get at your data almost immediately and restore to almost any platform with ease!


Vembu uses these various formats presented by the Virtual Drive technology to power almost all of its restores, and this is how BDR can take a physical server and convert it to a virtual machine – or a VMware VM and convert it to a Hyper-V VM.  This is functionality I've not seen in many other backup products, and it is very useful in my opinion!  Thankfully, this functionality is exposed within the UI as well, so if we wanted to manually mount a backup without doing a restore we could do so by simply clicking the 'folder' icon within the mount section of the Recovery screen (as shown below).


Although we could use this manual mount to restore pretty much any source to any target platform, it would certainly take some manual configuration and probably a little time on our end to do so.  For this reason, Vembu has some built-in restore workflows that we can use by clicking the 'Restore' icon shown above.  Depending on the source of our production data – whether it was Hyper-V, VMware, or a physical server – we will get some different options as it pertains to restoring our data.


As we can see, Instant VM Recovery, File-Level Recovery, and Download are available no matter what the source platform was – with live recovery options added to our VM-based backups, and a disk/partition-level recovery included with our physical machine backups.  To go through all the recovery options within all of the supported source types would be very time consuming and, quite honestly, a duplication of work, as the wizards and steps used to perform each type of recovery are very similar – which is a good thing!  Instead, let's take a look at some of the most popular recovery options and go through the motions.

Instant VM Recovery

Let's first take a look at Instant VM Recovery within Vembu BDR, and for this we will do so inside of our Hyper-V environment.  Instant VM Recovery really lives up to its name as it provides us with a ready-state VM which is a duplicate of our production VM in an almost instant fashion.  To do this Vembu uses a couple of different technologies depending on the platform the BDR server is running on: for Windows-based installs, Hyper-V is leveraged – for Linux, KVM is the technology of choice.  Basically, Instant VM Recovery takes our deduplicated and compressed backup files and directly boots them on the selected hypervisor, essentially transforming our backup files into a VM.  Certainly the performance of our VM will be impacted when running from these backup files, but this is more than made up for by the fact that we can instantly provide access to production resources, while we leverage something like Live Migration or vMotion to move our newly recovered VM back to production storage!

To get started with this type of recovery simply select 'Instant VM Recovery' from within the 'Recovery' section of Vembu and we are brought into the 'Restore Version' settings.  Here, we simply select the restore point, or restore version, that we would like to instantly recover to.


From the ‘Restore Data’ section, we are prompted to select the data (or in this case, the VM) that we wish to restore.


The 'Restore Options' step gives us a little flexibility in how we would like to recover our VM, and the options presented greatly depend on the 'Instant VM Recovery' mode we select and the platform our BDR server is installed on.  For instance, on a Windows-based BDR deployment, Hyper-V VMs can be instantly recovered in Hyper-V mode, whereas VMware VMs can be instantly recovered in either Hyper-V or VMware mode.

  • Hyper-V – Instantly recovers VM on the Vembu appliance configured with the Hyper-V role
    • Startup RAM – the amount of memory granted to start the newly created virtual machine
    • Configure Network Details – Allows us to specify an IP Address and Subnet mask for our new VM
  • VMware – Instantly recovers VM on a specified VMware host.
    • Target VMware Server/Datastore – specifies where we want to run our new virtual machine
    • VM Name – name of our new VM.

Since we are using our Hyper-V environment, I’ve selected Hyper-V here and provided the necessary information…


After clicking through the review and progress screens our VM will actually be created within the Hyper-V environment directly on our Vembu appliance, utilizing no extra space as the data is coming from the already existing backup files.  As you can see below, I have a couple of Instant Recoveries which I've initiated.  What we are left with is a VM booted completely from our backup files, instantly recovered.  So in turn, we could instantly provide our end users access to the VM, and take our time migrating it back to a production environment.


File-Level Restore

The file-level restore wizard (Recovery->Restore->File-Level Recovery) within Vembu works in a similar way to Instant VM Recovery: we select our VM to recover and its associated restore point, then finalize the wizard.  However, unlike Instant VM Recovery, file-level restore doesn't create a new VM within our environment – instead it takes the contents of the virtual or physical machine backup and mounts them to a local drive within our Vembu BDR server.  Shown below is the file system on my Vembu BDR server – as you can see, with a few extra drives: G: being the drive belonging to our physical server backup, and Y: being a drive belonging to our backed-up virtual machine, as I've initiated file-level recovery for both of those servers.


Once these drives have been mounted we can simply access the backed-up data and do as we please with it, whether that be copying it back to our production workloads or to another spot of our choosing.  When we are done, we simply need to go back into the 'Recovery' section of the BDR UI and 'Unmount' our backups using the folder icon as shown below…


Once we have unmounted our backups we can see that the folder icon flips back to being closed.

Disk Level Recovery

Disk-level recovery within Vembu BDR does just that – recovers individual disks, or partitions in the case of a physical server, back to their original locations.  That means individual vmdks can be restored to a VM, or individual partitions back to a physical server or workstation.  Again, the wizard used to perform a disk-level recovery is much the same as all of the other recovery types – selecting a restore point and source – and differs only when selecting a target.  As shown below, we can see the disk-level recovery options for restoring a single VMware disk back to the original VM by specifying the target ESXi/vCenter, along with an associated VM and datastore.


After completing a Disk Level Restore a new VMware VMDK disk is created and attached to the VM you have chosen.


Download

Finally we make our way to the last recovery option within Vembu BDR – the download option.  Download uses the Vembu Virtual Drive technology explained earlier to mount the backed-up VM and allow us to pull down various images of the backup, including VMDK, VHDX, and raw ISO.  That said, manually mounting our backup always selects the latest restore point, whereas the download option allows us to specify any restore point on disk.  Again, this is a really cool feature that I don't see in a lot of backup software – being able to essentially convert your VM from platform to platform by means of the backup is a neat idea.  Although there would certainly be some work to get everything running, it's definitely a great starting point.


Failing over a replica

Now that we have gone through all of our options as it pertains to restoring backups, it's time to look at one last restore technique that BDR provides – replication failover.  If you remember back to part 1 of this review, we went through the process of setting up a replication job within our vSphere environment and replicated one of our VMs (MSDC) – now we will take a look at the process of failing over to that replica.  Replica failover within BDR is different than restoring a backup as we do not need to transfer any files or data – our backup is essentially a replica of the VM, sitting on a host in the native VMware format.  We could, if we wished, simply power on this VM from within our vCenter environment, but by initiating the process through Vembu we gain a bit more in terms of functionality and options.

To begin failing over a replica we need to navigate to VM Replication->Manage Replicas.  The process can be kicked off by clicking on the 'Restore' button as shown below.


As shown below we have a few options as it pertains to managing our replicas: Failover, Finalize Failover, and Finalize Failback.  Let's first take a look at the Failover option.


The next few options in the wizard after selecting Failover are similar to what we have seen in almost all of the BDR wizards – selecting a restore point and selecting our restore source.  After we have done that and clicked the 'Failover Now' button on the review section, our failover process will initiate…


As we can see above our failover request has completed – which means we have successfully failed over to our replicated VM.  Operations are essentially switched from our production VM to our replicated VM, with any network mapping or re-IP rules applied to it.  This failover state, however, is not permanent and needs to be finalized in some way.  By default, BDR takes a snapshot of the replicated VM before it becomes active in order to allow us to revert back to various pre-failover states – which means failing over is a temporary step that needs to be further finalized.  By clicking the same restore button on the Manage Replicas screen, and this time selecting the 'Finalize Failover' option, we are presented with the following options on the Finalize Type section of the wizard: Undo Failover, Permanent Failover, and Failback.


The ‘Undo Failover’ option will essentially undo any changes that we have made to the environment – meaning the production VM will once again become the active VM, and the replicated VM discards any changes that were made to it while it was in the temporary failover stage and reverts back to its original state.  This option is normally used if the source VM gets restored or becomes successfully active again.

The Permanent Failover option is basically the opposite of the Undo Failover – it commits our changes to the replicated VM and in essence makes the replicated VM the new source VM, permanently.  This option would be used if we are absolutely sure that our source VM is no longer recoverable and want to permanently run from our replicated VM.

Finally the Failback option gives us the ability to failback to our production site.  This process will recover our replicated VM, along with any changed data while it was in a failover state, back to our production site, either on the original host or another host we prefer.  Again, just selecting Failback doesn’t commit the failback, it leaves us in another temporary state that needs to be finalized.  The options for finalizing a failback are…

  • Permanent Failback – If we have failed back to production and we are happy with how things are working, this option commits our actions – the failed-back VM becomes our production VM and is automatically excluded from any current replication jobs
  • Undo Failback – just as with failover we have the option to undo our failback – if something happens during the failback, or we find that the failed-back VM is not performing as expected, this option will revert our production workloads back to the replicated VM.

vSphere replication within BDR has many options available and I like how each step can essentially be reverted.  If you have engaged in failing over production VMs then you are most likely already in a disaster state – having the ability to undo both your failover and failback processes is certainly a nice thing to have during these times if something goes awry.


Wow – I think this is probably one of the longest reviews I’ve ever tackled, but the fact of the matter is I’ve only covered the core technologies which are included in the Vembu BDR Suite.  In addition to everything we have covered here Vembu provides backup and recovery for individual application objects such as SQL databases or Exchange emails, along with SaaS based protection for services like Office365 and Google Apps.  Vembu also provides integration into their cloud services, or even Amazon if you so please.  The point is, the Vembu BDR Suite is an all-encompassing product that can provide complete protection across your environment, be it on premises or off premises!

There is a lot to like about Vembu BDR – first off I think of commonality.  Although BDR requires multiple different applications to make up the suite, all of those applications provide a similar UI – same colors, same look and feel, and same types of functionality.  Even within single applications I see commonality – with the exception of maybe one step, the screens you go through in order to complete a restore are pretty much the same, whether it be file-level, VM-level, disk-level, or even failing over a replica – all the processes have a similar feel to them.

Also, the BDR console, or the main UI, is nice in that it pulls in information from all of the other applications – I can see the status of the backups and restores that I've made to my physical machines inside of the BDR interface, even though they were essentially set up from the ImageBackup application.  In all honesty I would love to see Vembu somehow port the process of deploying the ImageBackup application and its associated backup configuration directly into the BDR server – then you would get a true one-stop shop for managing those backup jobs, whether they be physical or virtual.  That said, at the very least we are able to report on all of our backup jobs from one UI – maybe we will see something like this in a future release.

Aside from commonality and overall management probably the most exciting piece of technology I found within the BDR suite is the HIVE file system and the associated Virtual Drive technology.  Being able to display the backup files out in formats such as vmdk, vhdx and ISO is a pretty nifty feature – but doing so in a matter of seconds, no matter what the source of the backup was is very impressive indeed!

With all that said I would most certainly recommend the Vembu BDR suite, especially to those companies looking for a common interface to protect their VM backups, physical backups, and SaaS backups all with one application.  From my experience the product has performed very well, completing backups and replications all within a decent time frame with some impressive performance results.  All this said, you don't have to simply take my word for it – you can download the product yourself, and by default all downloads come with a 1-month free trial so you can begin testing it out on your own terms.  Aside from the BDR Suite, Vembu offers a wide variety of products, such as a monitoring solution to centralize the monitoring of all of your backups, as well as some which are absolutely free, such as Desktop Image Backup and the Vembu Universal Explorer for discovering application items within Microsoft apps.  I mentioned before that I hadn't heard of Vembu before starting this review – and if you haven't, you certainly should check them out.

Vembu BDR in the lab – Part 1 – Backup!

I've done quite a few reviews over the past five years or so on this blog, and for the most part I always had somewhat of an idea who the company was that I was writing about – however this time things are a bit different!  I had never heard of Vembu before they gave me a shot to test out their flagship software, Vembu BDR Suite, which is why I was pleasantly surprised when I started doing a bit of research on the company!  Vembu was founded way back in 2002, and even though their first products to market were created around SQL migrations, their main focus then was backup – and has been now for over a decade!  Being in the backup business for 10 years, you would think I would've heard their name; however, Vembu was more of a "behind the scenes" product – initially aimed at Managed Service Providers who at the time would white-label their software for resale – something I, or anyone within the community, wouldn't have noticed!  It's that 10+ years of experience that shines through in their new release of the Vembu BDR Suite.  Now what surprised me the most was the sheer number of products and supported platforms that Vembu BDR supports – Vembu can back up vSphere, Hyper-V, physical servers, desktops, Exchange items, SharePoint items, SQL items, Office365, Google Apps, etc. – and they can back that up on-site, off-site, or even to the cloud, be it the Vembu cloud or Amazon – not to mention that most of this is done within a single UI – impressive!  With all of this going on I was surprised at how little I'd heard of them.  Either way, I've seen lots around the community as of late from Vembu and was excited to get them in the lab and try out some of their tech!

Now, due to the sheer number of supported platforms and options I've decided to break this review up into a couple of parts.  This part will focus solely on setting up our backups, be they physical, VMware, or Hyper-V, along with explaining a little about Vembu's image verification, which ensures we have restorable backups when the time comes that we need to use them.  The next part will explore what really matters when it comes to backup – and that's restoring!  Vembu provides a lot of flexibility when it comes to restores – object-level, file-level, VM-level, etc. – and really has a unique take on how they expose their backup files for restoration, such as their Hive file system and Virtual Drive mounts.

So with that all said let’s get on to the show….

Getting Vembu in the lab

We will quickly go over deployment and configuration, as getting Vembu up and running is very, very simple.  Vembu provides a number of options to get started with the BDR Suite – think virtual appliance for both Hyper-V and VMware, Windows, Linux, etc.  Again, not only providing support for backing up multiple platform environments, but providing choice when it comes to what environment you want to run Vembu in.  I chose to go the Windows route, but either installation method provides you with the same easy-to-follow, wizard-driven install.  Essentially, the Windows install simply prompts for a password for the MySQL server, a location for the MongoDB database, the system/user account to run the Vembu BDR service under, a username and password for BDR itself, and a location to use for the default repository where BDR will store its backups.  Not a lot of information to have to provide for the amount of configuration the installer actually performs.

As far as configuration goes there isn’t a lot we need to do after installing Vembu BDR.  We could navigate to the Management menu and set some things up such as the time zone and SMTP settings for reports, however you can also just get into the nitty gritty – which is what I chose to do…

Backup Time

As mentioned above, Vembu BDR supports both VMware and Hyper-V, along with physical servers, workstations, etc.  As well, we can process restorations of object-level items such as MS Exchange emails and SQL databases, along with backing up SaaS-based applications like Office365 and Google Apps.  It would be impossible to take a look at every one of these in one review, so we will focus on probably the three most popular platforms: VMware, Hyper-V, and physical.

Backing up VMware vSphere VMs

In order to start backing up our VMware environment we will need to provide Vembu with some information regarding either our standalone ESXi hosts or our vCenter Server.  By selecting 'Add VMware vSphere Server' (after going to Backup->VMware vSphere) we are able to start processing virtual machines which live on either our vCenter Server or a standalone ESXi host.  The process of adding our vSphere environment is quite simple, accomplished by entering the hostname/IP of our vCenter Server/ESXi host and passing along some credentials – a pretty common process in any application which requires data from vCenter.


Once our server has been added we can immediately begin to back up our VMs.  Creating a job within BDR is done through the Backup menu (shown above) and then selecting the 'Backup Now' option next to the vCenter/ESXi server you wish to process – doing so will begin the 5-step process of creating our first vSphere backup job within Vembu BDR.


The first step we need to complete is telling BDR exactly what we wish to back up.  As you can see above, I simply selected the checkbox next to my desired VM and clicked 'Next'.  The release of BDR 3.6 added support for backing up complete clusters as well, and if you wish to exclude any VMs (if selecting to back up a complete host/cluster) or VM disks, this can be done by simply clicking the 'VMs/Disk Exclusion' button.  A pretty easy and self-explanatory process…


As far as scheduling goes we have the option to run our backups hourly, daily, or weekly, and set our desired days/hours in which we would like to execute the job.  With the smallest interval being every 15 minutes, you can be sure to prevent as much data loss as you can.  Here I've left the defaults as is, to run hourly every day.


As we move into the retention configuration we start to see some of the more advanced functionality that BDR provides us with.  Before explaining all of our options it's best to describe a little bit about how BDR stores its backups on disk.  During the first run of a backup job, BDR will take a full backup of our VM from the production storage – each subsequent run will leverage vSphere's Changed Block Tracking and copy only the differences – those blocks which have changed within the VM since the last backup – and store them in incremental files.  When the retention period is hit, BDR takes the oldest incremental restore point and injects it into the full backup file, releasing that space to be used as free capacity.  With all that said, let's take a look at the retention options available to us.

The default retention setting on this step, Basic Retention, essentially means we will keep 1 full backup and X incremental backups on disk – so if we select 3 as our retention policy, we will be left with one full and two incremental backups, giving us a total of three points in time to which we can restore.  By selecting Advanced Retention we can apply a more robust Grandfather-Father-Son (GFS) backup scheme to our VMs.  GFS retention within BDR allows us to merge our backups into additional full backups, ensuring we are covered when it comes to meeting compliance on restore points.  Think of situations such as having hourly incremental backups take place while merging them into a daily full, in turn merging our daily fulls into weekly fulls, and so on and so forth for monthly and even yearly.  Essentially, GFS gives us the benefits of small RPOs while maintaining the assurance and compliance of having the number of daily/weekly/monthly restore points our business requires.

Aside from retention, we also see a couple of other settings on this step.

  • Application-Aware Options – BDR can utilize VMware tools to invoke the VSS writers within a VM in order to ensure that the backups are application consistent – it’s here where we setup options such as how to proceed if application processing fails, as well as specify which credentials to use and how to handle transaction logs (truncate or leave alone)
  • Additional Full Backups – BDR by nature creates one full backup along with many incremental backups following it.  Having a long chain of incremental backups may be ok in some environments, but enterprise organizations may want to have additional full backups performed periodically to sit in between the incremental backups.  Here we can setup full backups to be performed in addition to the incremental on  an hourly, daily, or weekly schedule.  We can also limit the number of full backups we want to keep on disk as well in order to preserve space and capacity on our repositories.


The Review Configuration step is just that – a step that allows us to specify a name for our backup job, as well as review the selected configuration we have made in the past three steps.  From here, we can simply click ‘Run the backup’ to execute our newly created job.


After committing our backup job we are now brought directly to the ‘Progress Details’ section.  No matter what schedule we provided, Vembu will always immediately begin the first full backup of the VMs specified.  Here we can see the associated tasks and events, as well as transfers and progress rates of our newly created job as it runs.  This progress screen isn’t cluttered with a bunch of statistics that we don’t necessarily need to see –  It’s a clean and simple UI that simply shows you a percentage complete, along with the total backup size, and currently running transfer rate – everything we would want to know about a job in progress.


Once our first job has been setup we can view details about it by navigating to Backup->List Jobs.  Here we can do many things such as suspend (disable), edit, or delete our jobs, as well as view their current running status.  Clicking the Reports icon will allow us to drill down into more detail about the last run of the job, showing us the size, status, and time taken to perform the job.

Essentially this is it for creating and running a backup job for vSphere within Vembu BDR, however there is one more vSphere protection strategy for VMware that Vembu provides – Replication.

Replicating VMware vSphere VMs

Selecting VM Replication from the top menu, then clicking the Replicate Now icon next to your desired vCenter server will introduce us to the Replication wizard.  The Choose VMs and Configure Scheduling options are the same as those of the backup process we went through earlier, allowing us to select our desired VM and set up an hourly/daily/weekly schedule to perform replication.  That said, things start to change a bit as we move into the third step of the setup.


The Target Replication Host section is where we specify where we would like to host our replicated virtual machine.  Here we can select an existing vCenter/ESXi host, or ‘Add DR VMware Server’ if we would like to add a new one.  Also specified here are the datacenter and datastore where the replicated VM will reside.  Since this will be an exact copy of our VM we have the option to add a suffix to the replica’s name in order to distinguish it from our production VM, as well as set our desired retention policy on the replicated VM.


Network Mapping allows us to map between our source and target networks.  Normally, when replicating to an offsite location we may have different network names within our vSphere environments that we attach our VMs to in order to gain network connectivity.  Network mapping allows us to configure a table of sorts that will map our source production networks to networks within our DR site, eliminating a mundane and time consuming step when it comes to actually failing over our replicas.


Re-IP Mapping allows us to specify some rules around what our replica IP address will be, supporting the situation where we may have different addressing schemes between the production and replication site.  Adding a Re-IP rule is as simple as clicking ‘Add Rule’ and specifying the source and target IP schemes, along with DNS and Gateway information.  Then, during failover, BDR will automatically apply these Re-IP rules to the replicas in order to ensure we have the proper connectivity during a disaster.
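A Re-IP rule like this essentially re-bases an address from the production subnet onto the DR subnet.  A rough sketch of the idea (purely illustrative – not BDR's code), preserving the host portion of the address:

```python
import ipaddress

def apply_reip_rule(source_ip, source_net, target_net):
    """Re-base `source_ip` from the production subnet onto the DR subnet,
    keeping the host portion intact -- roughly what a Re-IP rule does to a
    replica's address at failover time."""
    src = ipaddress.ip_network(source_net)
    dst = ipaddress.ip_network(target_net)
    # Offset of the host within its subnet, carried over to the target subnet
    offset = int(ipaddress.ip_address(source_ip)) - int(src.network_address)
    return str(ipaddress.ip_address(int(dst.network_address) + offset))

# A production VM at 192.168.10.25 comes up in the DR site as 10.50.10.25
apply_reip_rule("192.168.10.25", "192.168.10.0/24", "10.50.10.0/24")
```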


Again, just as with our backup job, once configured we will be brought to the same simple, clean progress screen outlining everything we need to know about the running process of our replication job.

Hyper-V Backup

Adding a Hyper-V server into Vembu is fairly straightforward and similar to adding a VMware host.  In terms of Hyper-V we can add either a standalone host or an SMB host.  Either way, once added, Vembu will push what they call ‘Integration Services’ to the hosts in order to handle the processing of VMs.  To do so, BDR will need administrative privileges, meaning you will have to ensure that the service under which Vembu runs uses an account with sufficient privileges to go out and install software on your Hyper-V hosts.  Once you provide the proper credentials and hostname of your Hyper-V host, the integration services are deployed and the host will be available for you to back up inside of Vembu.  These integration services contain a number of technologies that are used to create the full backup, as well as a CBT driver to track modified blocks for incremental backups.  If our disks are stored on an SMB host, we will need to deploy the Vembu Integration Services there as well.


Similar to vSphere we can begin creating jobs directly after adding our Hyper-V host by clicking the ‘Backup Now’ button.


Just as we did in our vSphere backup, we simply check the VMs we wish to include in this job, and use the ‘VMs/Disk Exclusion’ button to exclude certain VMs or individual disks within them from the backup.



As we can see above, the backup options are identical to those in the VMware backups.  We have the option to set our schedules and apply different retention policies such as GFS, restore points, etc., as well as configure options related to Application Aware processing and additional full backups.


Again, after naming and creating our Hyper-V job we get a nice backup progress screen that auto refreshes throughout the backup process.  As we can see here we are getting some very nice performance with our Hyper-V backup job, processing over 500 MB/s!

Physical Machine Backup

Aside from just backing up virtual machines, Vembu BDR provides data protection for both physical servers and desktops as well.  As shown below we have two options when we browse to ‘Backup->Physical Machine’: Physical Image, and Files and Applications.


What each option does is actually quite different.  The ‘Physical Image’ option allows us to back up and process a complete physical machine and store it as an image file, while ‘Files and Applications’ allows us to back up just those files or application items, such as Exchange mailboxes, SQL databases, etc., that we prefer.  No matter which option is selected we are taken to a page on Vembu.com to download the respective client, which will need to be installed on the physical machine we wish to protect.  Let’s go ahead and take a look at the physical image component of Vembu BDR.

The client/agent installation is quite simple, requiring only that you specify a globally unique ID that will allow you to identify the machine within Vembu BDR.  After that it takes care of pulling down all of the prerequisites and required packages needed to run.  Once installed we can go ahead and run the Image Backup web console, which takes us to a familiar UI, similar to that of Vembu BDR.  Keep in mind these clients are installed on the physical server we wish to back up, not the Vembu BDR server.


After logging in and setting a time zone we simply need to point our client to our desired Vembu BDR server.  Once connected we will be redirected directly into our backup job setup as shown below….


As we can see, the layout and UI of the wizard is exactly the same as that of the Vembu BDR server where we were setting up backup jobs for our hypervisors and VMs.  It’s nice to have this uniformity between the different components that make up the BDR suite.  Also, as we can see above, before we are able to create our backup we will need to install the Vembu ImageBackup disk image driver.  I asked Vembu why they opted to go the route of having a second install for the image driver rather than simply bundling it in.  Their answer has to do with reboot policies – rather than set a reboot policy and automatically reboot our production workloads, Vembu gives us the option to simply install first and reboot when it is appropriate for us.  Either way, after rebooting and re-authenticating to the client, the same wizard appears as below…


The first step, just as in a virtual machine backup, is to choose our source – what we want to back up.  For a hypervisor this includes VMs; however, when dealing with physical machine image backups, this includes the physical disks and partitions within the server itself.  As we can see above I’ve chosen to back up all of the partitions on my physical machine.


On the next step we once again see the familiar scheduling settings as we did within Virtual Machine backups.  Set our desired schedule, as well as our target backup server and click ‘Next’.


The last configurable step of the wizard allows us to specify the retention on the backups we take.  The same options that were available to us within Virtual Machine backups are also applicable to physical server backups as well – meaning we can setup GFS backups, Application Aware settings, as well as schedule additional full backups to be kept on disk aside from the set retention policies.


Once completed, our initial full backup will start, and we will be left with a nice progress screen just as we have seen within the virtual machine backups.  The backup of a physical machine is pretty straightforward and easy, but I would still like to see some way of deploying these clients centrally from our BDR server, as well as setting up the initial backup jobs for them – maybe in a future release!  It should be noted though that although we do not centrally create these physical backups through BDR, we can indeed report on them.  As we can see below, the screenshot on the left is the reporting from ImageBackup locally on the physical server, with the corresponding screenshot on the right reporting on our physical backup amongst all other VM backup operations (never mind the failures, as I had some VMs powered down in my lab during certain times).  Also we can see that the performance and deduplication provided by ImageBackup is very good, taking only a handful of minutes to back up roughly 22 GB, compressing it down to 16 GB.



Aside from simply setting up the backup job, the Image Backup UI contains some other useful information and configuration options as well.  Under configuration we can see a number of options.

  • User Management – allows us to create users and grant access to the Vembu ImageBackup UI
  • Backup Schedule Window – allows administrators to define certain times of certain days where backups should NOT run – thus guaranteeing that our backups will not impact our business during certain production hours.
  • Bandwidth Throttling – This can be used as another means to limit the impact of backups on our production networks.  Bandwidth throttling allows us to cap the network bandwidth consumed by backup jobs at a certain number of MB per second.  We can throttle always, or only between certain times of the day, with the option to include or exclude weekends when the production network may not be in use.
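Both the Backup Schedule Window and Bandwidth Throttling boil down to time-window checks.  Here's a small illustrative sketch (not Vembu's implementation, and the tuple shape is my own assumption) of deciding whether a throttle applies at a given moment:

```python
from datetime import time

def throttle_limit(now, weekday, windows):
    """Return the bandwidth cap that applies at `now`, or None if unthrottled.

    `windows` is a list of (start, end, weekends_included, limit_mbps) tuples,
    a hypothetical shape for the throttling rules described above.
    """
    is_weekend = weekday >= 5  # Monday=0 ... Saturday=5, Sunday=6
    for start, end, weekends_included, limit_mbps in windows:
        if is_weekend and not weekends_included:
            continue  # this rule excludes weekends
        if start <= now < end:
            return limit_mbps
    return None  # outside every window: backups run unthrottled

# Throttle to 50 MB/s during weekday business hours only (hypothetical policy)
rules = [(time(8, 0), time(18, 0), False, 50)]
throttle_limit(time(12, 0), weekday=2, windows=rules)  # Wednesday noon -> 50
throttle_limit(time(12, 0), weekday=6, windows=rules)  # Sunday noon -> None
```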

Image Verification

So now that we have set up multiple jobs, backing up both VMware and Hyper-V as well as a physical machine, the next logical step is to perform some restores!  That said, before we go into the restore process I wanted to talk a bit about what Vembu calls Image Verification.  Image Verification ensures that, before we go to perform a restore on a given backup, our data will indeed be restorable, correct, and intact.  Vembu’s Image Verification takes a tiered approach, attempting to detect many of the factors that may cause a failure during a restore…

  1. Mount Check – A mount check basically takes our backed up VM and performs a mount to the BDR server.  This ensures that if we ever need to perform instant mount during a DR scenario that we will be successful.   We will talk more about this mount process in part 2 of this review.
  2. Boot Check – The final mandatory step of the verification, where our backup is booted up within a virtual machine.  Once booted, Vembu takes a screenshot of the booted VM and stores it within the configuration – allowing us to get a visual “peace of mind” that our backups are restorable.
  3. Integrity Check – This is an optional step of the verification as it performs a chkdsk on our VMs which can take quite a bit of time.

Vembu’s implementation of Image Verification is not something that we need to schedule as it is in other backup products – instead, by default Vembu automatically runs the verification process once a day.  Certainly this is a nice feature to have, as nobody wants to go to restore a VM only to find out that the backups themselves are corrupt!
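The tiered approach can be pictured as a short-circuiting pipeline: each check only runs if the previous one passed, with the expensive integrity tier opt-in.  A toy sketch of that control flow (the booleans stand in for real mount/boot/chkdsk operations; this is not Vembu's code):

```python
def verify_restore_point(checks, run_integrity_check=False):
    """Run the tiered verification in order, stopping at the first failure.

    `checks` maps tier name -> boolean result of that tier.  The integrity
    tier is optional because a chkdsk-style scan can take a long time.
    """
    tiers = ["mount", "boot"]
    if run_integrity_check:
        tiers.append("integrity")
    for tier in tiers:
        if not checks[tier]:
            return (False, tier)  # surface which tier failed
    return (True, None)

# A backup that mounts fine but fails to boot is flagged at the boot tier
verify_restore_point({"mount": True, "boot": False})  # -> (False, "boot")
```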

Stay tuned for Part 2

So we’ve gone over the backup of vSphere, Hyper-V and physical environments, as well as touched on how we perform items such as vSphere replication and Image Verification.  What I really liked about going through each item was the uniformity – no matter what we were doing, the wizards and configuration of jobs were very similar – no need to learn new terminology and processes when switching source environments.  In our next post we will take a look at what really matters: recovery – going through all of the different restore types as well as replica failover.  For now, if you want to get started with your own Vembu deployment you can do so by downloading a one-month free trial of the entire BDR suite!  Thanks for reading and stay tuned for part 2.

Nakivo Screenshot Verification – Getting to know your backups!

Picture yourself in this scenario – you walk into work on a Monday morning where you are promptly greeted by just about every IT staff member in the building.  They quickly tell you that certain mission-critical services are down and the company is losing money as we speak.  But that’s OK, you are here, the infamous backup guy – the guy no one cares about unless things go down.  Nonetheless you sit at your desk and begin the restore process.  “No problem,” you say, “services should be back up in 10 minutes or so…”.  The VM is fully restored and powered on – you sit, watching the console and the Windows spinning wheel, and all of a sudden you see this!


Yikes!  Your backups, just like their production counterparts, are corrupt – you try a different restore point; no go, they are all corrupt.  Believe it or not this is a common scenario that is played out inside organizations everywhere.  Backups, just like any other production workload or service, need to be tested in order to ensure that they are indeed restorable.  The best way of testing these backups is to perform full restores of the data – however doing so after each and every backup job can be quite time consuming and inefficient in terms of resource usage.

Enter Nakivo

Nakivo does a great job of backing up and protecting your VMware and Amazon environments and has a lot of features included within their product, Nakivo Backup and Replication.  I’ve previously reviewed the Nakivo product as a whole here if you’d like to check it out.  This post will focus on one feature – a feature that helps to prevent situations like the one above – Nakivo Screenshot Verification.  There is no worse feeling than having terabytes of backups that prove to be unreliable and, in turn, useless – which is the exact reason why Nakivo has developed Screenshot Verification inside of their Backup & Replication software – to give you peace of mind that when push comes to shove, your backups will indeed be restorable!

What is Screenshot Verification?

Screenshot verification is a simple concept with a lot of underlying technology at play – in its basic form, Nakivo will verify each VM backup after a new restore point is completed.  This is done by booting the VM directly from its corresponding deduplicated and compressed backup files located on a Nakivo backup repository, a process that Nakivo calls Flash VM Boot.  During a Flash VM Boot, Nakivo creates a new VM on a specified ESXi server.  It then takes the disks from within the backup files and exposes them as iSCSI targets; upon completion, the disks are mounted to the new VM as vRDMs.  A snapshot is created so that any changes can be discarded, and the newly created VM is powered on, isolated from the production network.  Once booted, Nakivo utilizes VMware Tools to take a screenshot of the booted OS.  After the screenshot is taken, the newly created VM is discarded, the backup files are brought back to a consistent state, and the screenshot is included within any job reports, either generated through the UI or emailed.
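The sequence above can be sketched as a workflow in which the throwaway VM never outlives the test.  Every object and method name below is a hypothetical stand-in for steps Nakivo automates – this is not Nakivo's API, just the shape of the flow:

```python
def flash_vm_boot_verify(backup, esxi_host):
    """Illustrative sketch of the Flash VM Boot flow described above.

    All helper objects here are hypothetical stand-ins; the try/finally
    mirrors the guarantee that the placeholder VM is always discarded and
    the backup files returned to a consistent state.
    """
    vm = esxi_host.create_placeholder_vm(backup.name)
    try:
        targets = backup.expose_disks_as_iscsi()  # backup disks as iSCSI targets
        vm.attach_vrdm_disks(targets)             # mounted to the new VM as vRDMs
        vm.create_snapshot()                      # any changes become disposable
        vm.power_on(isolated=True)                # kept off the production network
        return vm.take_screenshot()               # proof the guest OS actually boots
    finally:
        vm.discard()                              # placeholder VM never survives
        backup.restore_consistent_state()         # backup files back to consistent
```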

It’s this screenshot that gives you the “peace of mind” that when the time comes to restore your VMs, they will indeed be restorable!  A simple picture of the Windows login screen or Linux bash shell – or lack thereof – certainly would’ve helped in the above scenario, alerting us that the next time we try to reboot our production VM or restore from a backup, problems may occur – giving us the leeway and time to fix the situation or restore to our last known good restore point on our own terms rather than doing so during an emergency.

How do we set this up?

As far as how to set up and configure Nakivo Backup and Replication as a whole I would recommend checking out my previous review here – but focusing solely on Screenshot Verification, let’s go through the steps below…  **Note: we are setting this up for one of our backup jobs; however, we can also enable screenshot verification for our replication jobs as well.**

Screenshot verification, albeit with a lot of moving parts underneath, is actually a simple enable/disable feature within the backup job.  Nakivo has done a great job of abstracting away all of the complicated technology underneath and presenting us with some simple and easy to use configurable options.  On the last step of a job wizard, we see the Screenshot verification setting at the bottom of the first column (as shown below)…


Upon selecting ‘settings’ we are presented with some more options which we can configure.  The target container is the placeholder in which we will register the newly created VM that will be mounted to the backup files.  This can be your average vSphere object that VMs belong to, such as a host or cluster.  Target datastore is where we would like to place the configuration files (vmx) of the VM that is created.  Verification Options allows us to do things such as limit the number of VMs which will be verified simultaneously.  Running too many Screenshot Verification tests at once can produce a heavy load on your backup repositories, causing major delays in boot time depending on your hardware configuration – it’s best to tune this to your liking.  Also configurable here are things like RTO, which in this case defines the number of minutes that the VM has to fully boot and initialize VMware Tools.  If this time is exceeded, the VM will be labeled as failed and the placeholder VM is discarded.  We can also set the delay between when the guest OS has booted and the actual execution of the screenshot.
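The RTO setting effectively puts a deadline on a polling loop: keep checking for VMware Tools, and fail the verification once the budget is spent.  A deterministic toy sketch of that logic (the booleans stand in for real Tools status polls – an assumption for the sketch, not Nakivo's internals):

```python
def boot_within_rto(tools_polls, rto_polls):
    """Treat the RTO as a budget of status polls.

    If VMware Tools does not come up within `rto_polls` polls, the
    verification is marked failed (and the placeholder VM would be
    discarded).  `tools_polls` is a list of booleans standing in for
    real per-interval Tools status checks.
    """
    for attempt, tools_up in enumerate(tools_polls[:rto_polls], start=1):
        if tools_up:
            return ("verified", attempt)
    return ("failed", None)  # RTO exceeded

boot_within_rto([False, False, True], rto_polls=5)   # Tools up on the third poll
boot_within_rto([False, False, False], rto_polls=3)  # RTO exceeded -> failed
```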


Honestly, this is all we need to do!  Simply save your job and on your next job run Screenshot verification should take place.  As shown below, we can see the events that take place within vCenter during a Screenshot verification test, along with the placeholder VM that is created in order to perform these tests, noting the creation and deletion of the VM, along with any required iSCSI setup.  This is all automated by Nakivo and requires no manual setup on your part.


So we have now seen that Screenshot verification has been executed, but what does it look like in one of the reports/emails?  Right-clicking any job within Nakivo gives us the ability to run a few reports – the one we are most interested in now is the ‘Last run report’.  After generating and opening the ‘Last run report’ for our job with screenshot verification enabled we should see new information included in the report.  As shown below, we now have a ‘Last verification’ row, indicating whether or not the screenshot verification was successful – in addition, we can also see the actual screenshot that was taken by Nakivo.  Below we see the actual login screen, giving us a pretty good indication that if we were to restore from this backup we would be successful.


Hey, Let’s have some fun!

As you can see, Screenshot verification is a very valuable tool, giving us that peace of mind that our backups are actually restorable.  But where’s the fun in that, right?  Let’s break some stuff and see how Screenshot verification reacts…

So, on my production VM let’s mimic some corruption and see if we can’t get Nakivo to detect it before we do!  In order to do this I’ve run the following commands on my production VM within an administrative console (***NOTE*** Don’t do this in production, please, please don’t do this in production 🙂)

takeown /F C:\Windows\System32\WinLoad.exe

cacls C:\Windows\System32\WinLoad.exe /G administrator:F

del C:\Windows\System32\WinLoad.exe

bcdedit /set recoveryenabled No

The first three lines are pretty self-explanatory: taking ownership, assigning rights, and deleting WinLoad.exe – the file that actually executes the loading of Windows upon boot.  The last line simply disables automatic repair, Microsoft’s line of defense for preventing people from doing stupid things like this 🙂  Anyways, we’ve essentially botched our server here, however we won’t notice until we do a reboot, something that probably doesn’t happen that frequently in a production environment – thus, it’s probably going to go unnoticed for quite some time – that is, unless we are utilizing Nakivo’s screenshot verification on our backup jobs.

Let’s go ahead and run our backup job again on this same VM.  This time, we will see Nakivo report a failure on the backup job, specifying that screenshot verification has failed – upon further investigation, we can see below what appears on the console of the VM that was used for the verification, which is exactly what would happen to our production VM if we were to reboot it!  Even though our newly created backup is not restorable, at least we now know, and it won’t be a surprise to us in an emergency situation like the previous scenario.  This gives us time – time to come up with a plan, whether that be restoring from a known good backup, coming up with some sort of failover plan, or even building a new server.


So in the end screenshot verification proves to be a very valuable tool in any backup administrator’s belt – whether that be knowing that your backups can be restored successfully or, sometimes even more important, knowing that they can’t – and in some cases, Screenshot verification can be leveraged to prevent production outages by getting a preview of things to come upon the next reboot!  The Flash VM Boot technology makes Screenshot verification a no-brainer in my opinion.  If you are using Nakivo, you should be enabling this on all of your mission-critical VMs.  To learn more about Screenshot verification and other Nakivo features check out their help center here.  Fancy trying it for yourself?  You can get a full featured trial here, or if you are a VMUG member, VCP, or vExpert why not grab a free NFR license to tinker with!  If that isn’t enough options for you, Nakivo also offers a fully featured free edition – yes, all of the same features of their premium paid versions, just limited to a couple VMs.  Thanks for reading!

Nakivo Instant Object Recovery for Microsoft Active Directory

Nakivo, a backup company based out of Silicon Valley, has been providing backup and replication software to the world since late 2012.  Today we will not focus so much on getting Nakivo up and running – we’ve already done that thoroughly here – but instead we will take a look at one individual feature: Instant Object Level Recovery for Microsoft Active Directory.  Let’s face it – mistakes happen – users get deleted, OUs get wiped out, security groups get completely out of sync.  This is all stuff that happens, and happens more often than we know.  Certainly performing a complete full restore of a domain controller can be a little bit over the top just to get one individual user back (depending on who it is, I suppose 🙂), which is why Nakivo has been providing a means for restoring these individual Active Directory objects since their 5.5 release back in March of 2015.  Today we will take a more in-depth look at just how we perform these restorations.  Rather than simply showing how things are done I thought I’d have a little more fun with it this go around, and put a little story behind it for all of our enjoyment 🙂  With that said, let’s dive in!

The Scenario

Let’s paint the scene – you are a sysadmin working for a pretty famous hockey club based out of Montreal.  You are using Nakivo to protect a couple of datacenters, one in Montreal and another in Brossard, with a fully virtualized Active Directory.  One morning, for whatever reason, your supervisor was a little off his game – maybe it was too much wine the night before, or perhaps he had a heaping of bad poutine at lunch, but when asked to disable and enable certain players’ directory accounts after a blockbuster trade, he had a slip-up.  Sure, he disabled the “psubban” account of the outgoing player as he was asked to, however in the process of creating the new “swebber” account, somehow he ended up deleting Andrei Markov’s account (amarkov).

It wasn’t until Andrei showed up for practice that morning that anyone noticed – Andrei attempted to log in and quickly realized that something was up.  When the helpdesk ticket finally made its way to your supervisor’s desk he knew immediately what had happened and quickly called upon you to help out.  “No worries,” you said, “We’re protecting that server with Nakivo!”

How can Nakivo help you?

Thankfully you had already set up a backup job which processes a domain controller belonging to the canadiens.local domain, the same domain the user was accidentally deleted from.  We won’t go into the nitty-gritty details of how to set up the backup job here, as this post focuses solely on the recovery – we have covered it in detail in another post if you’d like to check it out.  Instead we’ll go through the steps to restore Andrei’s account.  The first thing we need to do is fire up a browser and log into Nakivo Backup and Replication.  After logging into the application, simply selecting ‘Microsoft Active Directory objects’ under the ‘Recover’ menu kicks off the process (shown below).


The next step is quite simple and pretty self-explanatory – we simply need to select the backup of our domain controller, in our case named MSDC, and then select a desired restore point to restore from.  As shown below we also have the option to ‘Automatically locate application databases’, which is checked by default.  If we happened to know the exact location of the MS AD database then we could uncheck this and specify the path, and in turn maybe save a little time as Nakivo wouldn’t need to scan for the ntds.dit file.  Honestly though, the amount of time it takes Nakivo to locate the Active Directory database is trivial, so let’s leave this checked and click ‘Next’.


Nakivo will now take a moment to load the desired restore point and display it to us.  The amount of time this takes greatly depends on the size of your Active Directory infrastructure.  Canadiens.local is relatively small, and took only a few seconds to load – but before we move on to the next step it’s good to go over what is happening behind the scenes here.  Nakivo Backup & Replication is actually scanning and mounting the server directly from within the compressed and deduplicated backup file – at no time does it perform a full recovery of the VM itself, saving us valuable time as we only need to restore that one individual object.  As shown below we are presented with a screen on which we can browse through the entire Active Directory infrastructure and find the object we’d like to restore.  It should be noted here that Nakivo supports object-level recovery for not just users, but containers and groups as well – so if it was an Organizational Unit or Security Group that was deleted we would be able to restore it in the same manner.  Next we select the object by simply clicking the checkbox beside it, and then click ‘Download Selected’.  Alternatively we could click ‘Forward Selected’ to have Nakivo email out the ldif files to be used for import.  At this point we have a couple of recovery settings we can specify: User will be disabled – restores the user with the account disabled; or User must change password at next logon – Nakivo automatically generates a new password for the restored user and sets the ‘Change password on next logon’ flag in AD.  Any password Nakivo generates will be stored in an included ‘Passwords.txt’ file added to our download.


After downloading the recovery bundle (it should come in a .zip format) we can now get started on restoring Andrei Markov’s account back into the canadiens.local domain.  We do this by first extracting the bundle and copying the extracted folder to the domain controller.  Since we are importing a user object back into Active Directory we need to have LDAPS, or certificate services, enabled and configured on the domain controller.  Thankfully the canadiens.local domain is already set up this way, however if we need to implement LDAPS there is a great post here on how to go about it.  Once we are back on the domain controller console we can simply open up an administrative command prompt and run the following command…

ldifde -i -t 636 -f filename -j logfolder  <- where filename is the path to the downloaded ldif from Nakivo and logfolder is a path for import logs to be placed.

We can see a screenshot below of the before and after shots of the canadiens.local domain, with the after showing that Andrei Markov’s account has indeed been restored.


With that you can now breathe easy as Andrei’s account is fully restored back into Active Directory, including all of his user attributes, group memberships, etc.  Honestly, it’s as if it was never deleted!  This whole process moves very quickly within Nakivo, honestly, within minutes – and when the time comes where you need to do a restore, especially one revolving around user access, time is most certainly of the essence.  Nakivo could certainly shave even more time off this process by implementing some way to automate the ldif import, or import directly back into the production VM – but honestly, the simplicity of this whole process far outshines the fact that it needs to be manually imported.  For now, you and your supervisor can get back to what matters most; the quest for Lord Stanley.

If you would like to learn more about Nakivo’s Instant Object Recovery for Active Directory or any other feature they offer I highly recommend checking out their help center here, where you can find items such as their knowledge base, release notes, and a very well written user guide.  Also if you want to check it out for yourself you can get a full featured trial here, or if you are a VMUG member, VCP, or vExpert why not grab a free NFR license to tinker with!  If that isn’t enough options for you Nakivo also offers a fully featured free edition – yes, all of the same features of their premium paid versions, just limited to a couple VMs.  Thanks for reading!

Cohesity 3.0 – One platform for all your secondary storage!

After just over half a year of making their 1.0 product generally available, Cohesity, a company based out of Santa Clara, has announced version 3.0 of its flagship secondary storage products DataProtect and DataPlatform.  I had the chance to take a 1:1 briefing with Cohesity to check out what’s new and find out just how they define secondary storage, and thought I’d share my thoughts around the new features and overall solution from Cohesity here…

What is secondary storage?

Before we get too in-depth around the features and benefits of the Cohesity platforms it’s nice to stop and take a look at just what secondary storage is.  Quite simply, Cohesity sees secondary storage as any storage hosting data that isn’t “mission critical”, and surprisingly they are also discovering that this non-“mission critical” data takes up the majority of an organization’s overall capacity.  As shown below we can see that data such as backups, test/dev copies, file shares, etc. all fit into the secondary storage profile – data that is rarely used, fragmented, and complex to manage; data that Cohesity defines as “Dark Data”.


All of this “Dark Data” can become a bit of a challenge to manage and maintain – we end up with numerous backups that we don’t touch, and we have many appliances and servers within our datacenter performing various functions such as deduplication, compression, analytics, etc.  All of these moving pieces within our datacenter each come with their own cost and their own hardware footprint, and for the most part they have no way of interfacing with each other, nor do they have the ability to scale together.  This is where Cohesity makes its play – simplifying secondary storage within your datacenter.

Cohesity – All your secondary storage – One Hyperconverged platform

Cohesity moves into the datacenter and aims to eliminate all of those secondary storage silos.  It does this by consolidating your backups, file shares, test/dev copies, and so on, moving them all onto a Cohesity appliance.  To get the data there, Cohesity first leverages its DataProtect platform.  DataProtect provides the means of backup: using seamless integration with your vSphere environment, Cohesity takes on the role of your backup infrastructure.  Utilizing user-created policies based on SLA requirements, Cohesity begins onboarding your backup data, adhering to specified RPOs, retention policies, and so on.  From there, DataProtect also adds the ability to offload to the cloud for archival purposes – think in terms of offloading certain restore points or aged backup files to Amazon, Azure, or Google.  Once the data resides on a Cohesity appliance, a number of benefits open up: think analytics, with a Google-like search across all of your secondary data, looking for pre-defined patterns such as social security numbers or credit card numbers.  DataPlatform also provides copy data management, letting you quickly spin up exact, isolated copies of your production environment directly on the Cohesity appliance.  This allows things such as patch management testing, application testing, or development environments to be deployed in a matter of minutes, utilizing flash-accelerated technologies on the appliance itself.
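To make the SLA-driven policy idea above a bit more concrete, here’s a minimal sketch of what such a protection policy might look like.  To be clear, this is purely illustrative – the class and field names are my own inventions and not Cohesity’s actual API – but it shows how an RPO and retention window translate into a backup schedule:

```python
from dataclasses import dataclass

# Hypothetical sketch of an SLA-driven protection policy -- the names
# here are illustrative only, not Cohesity's actual API or schema.
@dataclass
class ProtectionPolicy:
    name: str
    rpo_minutes: int        # maximum acceptable gap between restore points
    retention_days: int     # how long restore points are kept locally
    archive_to_cloud: bool  # offload aged points to Amazon/Azure/Google

    def runs_per_day(self) -> int:
        """How many backup runs per day are needed to satisfy the RPO."""
        return max(1, (24 * 60) // self.rpo_minutes)

gold = ProtectionPolicy("gold-tier-vms", rpo_minutes=60,
                        retention_days=90, archive_to_cloud=True)
print(gold.runs_per_day())  # 24 -- a 1-hour RPO means hourly runs
```

The point is simply that the administrator declares the SLA (RPO, retention, archive behavior) once, and the scheduling falls out of it – rather than hand-crafting individual job schedules per VM.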


Integrating all of these services into one common platform certainly has its benefits – lowering TCO, for one: not having to pony up for support and licensing on four different platforms is the first thing that comes to mind.  But beyond that it provides savings in terms of OpEx as well – no longer do we have to learn how to operate and configure different pieces of software to deal with our secondary storage, and no longer do we have to spend time copying data between solutions in order to perform various functions and analytics on it.  We can use one appliance to do it all, scaling as needed by adding nodes to the cluster and, in turn, gaining more compute, memory, and storage capacity, thus increasing the performance of the secondary storage environment overall.

So what’s new in 3.0?

As I mentioned before, this is Cohesity’s third release in just over half a year.  We saw 1.0 go GA in October of 2015; 2.0 followed in February of this year, adding replication, cloning, and SMB support; and now we have 3.0 hitting the shelves with the following improvements and features…

  • Physical Windows/Linux Support – perhaps the biggest feature in 3.0 is the ability to protect our physical Windows and Linux servers with DataProtect.  The same policy-based engine can now process the physical servers in our environment, letting us leverage all of the analytics and search capabilities we have always had on that data.
  • VMware SQL/Exchange/SharePoint Support – as we all know, in the world of IT it’s really the application that matters.  3.0 provides the ability to perform application-aware backups of our virtualized SQL, Exchange, and SharePoint servers, ensuring we get consistent and reliable backups that can be restored to any point in time or used to restore individual application objects.  3.0 also adds source-side deduplication for these application-aware backups, meaning only unique blocks of data are transferred to the Cohesity platform during a database backup.
  • Search and recovery from Cloud – 3.0 also brings the ability to search data that has been archived to the cloud and, more importantly, to perform granular object-level recovery on that cloud-archived data as well – meaning the cost of moving data out of the cloud should decrease, since we retrieve only the data we need.
  • Performance Enhancements – utilizing a technology based on parallel ingest, Cohesity can now spread the load of ingesting individual VMs across all the nodes within its cluster – resulting in not only a capacity increase when you scale, but a performance increase as well.  They have also done a lot of work around their file access services, roughly doubling the available IOPS and throughput.
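The source-side deduplication mentioned above is worth unpacking for a second.  The general idea behind the technique (sketched below in Python, purely as an illustration of the concept – this is not Cohesity’s implementation) is that the client fingerprints each block and only ships blocks the target hasn’t already stored:

```python
import hashlib

def blocks(data: bytes, size: int = 4096):
    """Split a byte stream into fixed-size blocks."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def source_side_dedup(data: bytes, seen: set) -> list:
    """Return only the blocks whose fingerprints the target lacks.

    'seen' stands in for the set of block hashes the backup target
    already stores; a real client would query the target for which
    hashes it is missing before sending anything over the wire.
    """
    to_send = []
    for block in blocks(data):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:
            seen.add(digest)
            to_send.append(block)
    return to_send

seen = set()
first = source_side_dedup(b"A" * 8192, seen)   # two identical 4 KiB blocks
print(len(first))    # 1 -- the duplicate block is transferred only once
second = source_side_dedup(b"A" * 4096, seen)  # block already on the target
print(len(second))   # 0 -- nothing is re-sent
```

Doing this on the source side is what drives the bandwidth savings during a database backup run: repeated or unchanged blocks never leave the server at all.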

And to top it all off, Best of VMworld


A huge congrats to Cohesity on the 3.0 announcement, and an even bigger congrats on winning “Best of VMworld 2016” in the Data Protection category!  If you want to learn more I definitely recommend checking out Cohesity’s site here – or, if you happen to be at VMworld, you have a couple more days to drop in and say hi at booth #827!