Tag Archives: Backup

Rubrik Firefly – Now with physical, edge, and moar cloud!

Rubrik, the Palo Alto-based company that strives to simplify data protection within the enterprise, has recently announced a Series C worth a cool $61 million, doubling their total capital raised to $112 million since founding just over a couple of years ago!  And as much as I love to hear about venture capital and money and whatnot, I’m much more into the tech – as I’m sure my readers are as well!  With that, alongside the Series C announcement comes a new release of their product, dubbed Rubrik Firefly!

Rubrik Firefly – A Cloud Data Management Platform

With this third major release from Rubrik comes a bit of a rebrand if you will – a cloud data management platform.  Nearly all organizations today have some sort of cloud play in their business; whether that be building out a private cloud to support legacy applications or consuming public cloud resources for cloud-native applications, they all have some kind of initiative that aligns with cloud.  The problem Rubrik sees here is that the data management and data protection solutions running within those businesses simply don’t scale to match what the cloud offers.  Simply put, customers need to be able to manage, secure, and protect their data no matter where it sits – onsite, offsite, or cloud, no matter what stage of cloud they are at – thus spawning the Cloud Data Management Platform.


So what’s new?

Aside from a number of improvements and enhancements, Rubrik Firefly brings a few big new features to the table: physical workloads, edge environments, and spanning across clouds.  Let’s take a look at each in turn…

Physical Workloads

I had a chance to see Rubrik way back at Virtualization Field Day 5, where we got a sneak peek at their roadmap – at the time they supported vSphere only and had no immediate plans for physical workloads.  The next time they showed up, at Tech Field Day 10, they had a tech preview of physical MSSQL support – and today that has become a reality.  As you can see, they are moving very fast with the development of these features!  Rubrik Firefly adds official support for those physical SQL Servers you have in your environment – you know, the ones that take up so many resources that the DBAs just will not let you virtualize.  Rubrik can now back these up in an automated, forever-incremental fashion and give you the same ease of use, efficiency, and policy-based environment that you have with your virtual workload backups.  Firefly does this by deploying a lightweight Windows service, the Rubrik Connector Service, onto your SQL Server, allowing you to perform point-in-time restores and log processing through the same UI you’ve come to know with Rubrik.  Aside from deploying the service everything else is exactly the same – we still have the SLA policy engine, SLA domains, etc.

And they don’t stop at just SQL!  Rubrik Firefly offers the same type of support for those physical Linux workloads you have lying around.  Linux is connected into Rubrik through an rpm package, allowing for ease of deployment.  From there Rubrik pulls in a list of files and directories on the machine and, again, provides the same policy-based approach as to what to back up, when to back it up, and where to store it!

Both the SQL msi installer and the Linux rpm package are fingerprinted to the Rubrik cluster that creates them – allowing you to ensure you are only processing backups from the boxes you allow.

Edge Support

Although Rubrik is shipped as a physical appliance we all know that this is a software-based world – and that doesn’t change with Rubrik.  The real value in Rubrik is the way the software works!  Rubrik has taken their software and bundled it up into a virtual appliance aimed at remote/branch offices.  This allows those enterprises with remote or branch offices to deploy a Rubrik instance at each location, all talking back to the mothership, if you will, at the main office.  The same policy-based approach can then be applied to workloads running at the remote locations, allowing things such as replication back to the main office, archive to cloud, etc. to be performed at the edge of the business as well as at the main office.  The virtual appliance is bundled as an OVA and sold on a “# of VMs protected” basis – so if you have only a handful of VMs to protect you aren’t paying through the nose to get that protection.

Cloud Spanning

Finally we come to cloud spanning.  Rubrik has always supported AWS as a target for archiving backups and brought us an easy-to-use, efficient way of getting back just the pieces of data we need from AWS – but we all know that Microsoft has been pushing Azure quite heavily as of late, handing out lots and lots of credits!  You can now take those spare credits and put them to good use, as Firefly brings support for Azure Blob storage!  The same searching and indexing technology that Rubrik has for Amazon can now be applied to Azure as well, giving customers options as to where they archive their data!

Bonus Feature – Erasure Coding

How about one more?  With the Firefly release Rubrik now utilizes erasure coding, bringing a number of performance and capacity enhancements to customers with a simple software upgrade!  Without putting hard numbers to it, customers can expect to see a big increase in their free capacity once they perform the non-disruptive switchover to erasure coding!

Firefly seems like a great step towards the cloud data management platform – a topology-agnostic approach to wrapping policy around your data, no matter where it is, ensuring it’s protected and secured!  The release of a virtual appliance perks my ears up as well – although it’s aimed directly at ROBO deployments now, who knows where it might go in the future – perhaps we will see a software-only release of Rubrik someday?!?  If you are interested in learning more, Rubrik has a ton of resources on their site – I encourage you to check them out for yourself.  Congratulations Rubrik on the Series C and the new release!

Nakivo – Backup and Replication for your VMs – A review!

Let’s face it – backup software is not the most exciting thing for a CIO in today’s world.  I mean, 99% of the time it sits idle, backing things up, spewing out reports – for the most part it’s somewhat of a money sinkhole in an environment – but when push comes to shove and someone has deleted that important email, or that mission-critical server fails, a piece of backup software can make or break a business!  Whether you are a simple SMB or a large enterprise, backup could almost be classified as one of the most important things in your organization – so it has to be easy, intuitive, and reliable!  Nakivo, with their flagship Backup & Replication, has taken that exact approach when developing their software!  Nakivo, headquartered in the infamous Silicon Valley, was founded in 2012 and after four fast-moving years has just released version 6.1 of their product.  This is one piece of software I have been hearing a lot about but never had the chance to check out.  With that said, I grabbed an NFR key from them and put it in the lab – and here are my thoughts.

Disclaimer: This review is sponsored, meaning I did receive compensation in some form for writing it! That said, as always, any review I post on my site is solely my words and my opinion and in no way was modified or changed by the vendor!



Before we dive directly into the installation it’s best to first explain a little about Nakivo’s architecture.  Nakivo is really broken down into three main components: a Director, a Transporter, and a Backup Repository.


The Director

We can think of the Director as somewhat of a management plane for Nakivo – providing the user interface we log into and maintaining lists of our virtual infrastructure.  It also handles the creation, configuration, and scheduling of our backup job environment.  We only need one instance of the Director as it can handle multiple vCenters and standalone ESXi hosts.

The Transporter

The next component, the Transporter, is our heavy lifter.  The Transporter is the data mover, per se, performing all of the backup, replication, and recovery operations as it receives its instructions from the Director.  The Transporter also handles features such as compression, encryption, and deduplication.  When we install a Director we automatically get one “Onboard Transporter” installed on the same machine by default, which cannot be removed.  That said, as we find ourselves processing many VMs simultaneously, we can scale our backup environment by adding additional standalone Transporters to help with the lifting!  As we do so, we also get network acceleration and encryption between Transporters as data is passed back and forth.  Finally we have the Backup Repository.

The Backup Repository

This one is pretty self-explanatory in the backup world.  It’s a container or pool of storage to hold our backups.  This can be a CIFS share or simply any local folder or storage attached to a Transporter.  Again, when we initially install our Director we also get an “Onboard Backup Repository” to use by default.


Alright, with a little background knowledge behind us it’s time to get Nakivo deployed – and wow, talk about some options!  Deploying Nakivo Backup & Replication should satisfy just about every environment out there!  If you are primarily a Windows shop, simply use the Windows installer.  Does your environment mainly consist of Linux-based distributions?  Hey, simply install the Linux package!  Or do you prefer the ease of simply deploying appliances?  They have you covered there as well with OVA-based virtual appliances!  Keep in mind that it doesn’t matter which installation method you choose – in the end you are left with the same product.  For the sake of this review I’ve chosen what I think might be the most common installation method – the Windows-based install.

So on with the install!  I’ve chosen the “Full solution” option as my installation type – meaning I will get an all-in-one install of a Director, Transporter, and Backup Repository on the same machine!  Certainly this might not be ideal for a production environment, but it suffices in the case of my lab.  As you can also see, the first screen allows me to specify where exactly I’d like to create the repository as well.

One click later…


Wait, what!?!?!  Yeah – one click!  One click and we are done with the Windows installation of Nakivo Backup & Replication!  As for the other installation types, they are just as easy – Linux requires the execution of a single command, and we all know how simple deploying a virtual appliance is!  If you are looking to protect an Amazon instance, a simple link to a deployable AMI is provided as well!


Time to start configuring the product now!  Just a note – I really dig the earth/space image that is displayed by default in the UI.  It’s a nice break from the standard box-type login screens you see in most products.


Upon first launching Nakivo you will be prompted to set up a username and password.  After doing so you will be brought into the Configuration wizard and, as you can see below, it only requires three types of information: Inventory, Transporters, and Repositories.  This wizard, along with many others within Nakivo, is short and to the point – and clearly makes sense in the simplest terms – think: what to back up, how to move it, and where to put it – easy, right?


As far as Inventory and VMware go, we just need to point Nakivo to our vCenter Server and provide it some proper credentials – from there the product goes out, discovers our inventory, and allows us to add it into Nakivo Backup & Replication.


The Transporter section allows us to add/import any existing Transporters we may have already installed in our environment – be they on vSphere or Amazon AWS.  As mentioned earlier, this review will simply use the “Onboard transporter” that is installed by default.


Lastly we can set up any Backup Repositories we want to have within our backup environment – again, I’m sticking with the default “Onboard repository” we set up during the installation, but if need be we can create new or import existing repositories into Nakivo during this step.

Once we are done we are brought into the Nakivo Management UI where we can begin creating jobs and backing up our environment – but before we go too far there are some other configurable options we can change that weren’t included in the initial bare-bones wizard.


I’m not going to go through all of the configurable options, but I’ll highlight a few common settings normally set up within environments, as well as some very nice-to-haves that Nakivo includes…

  • General->Email Settings – here we set up our SMTP options in order to have Nakivo send out alerts and reports.
  • General->Branding Settings – we have complete control over modifying the look and feel of Nakivo, uploading our own logos and backgrounds as well as support and contact information.
  • General->System Settings – allows us to specify how long we store job history and system events, as well as set up any regional options we prefer, such as week start days, etc.
  • Inventory – here we can add multiple vCenter/ESXi hosts as well as AWS environments.
  • Transporters/Repositories – again, this is where we can manage or add any new Transporters or Repositories to the system.
  • Licensing – handles the changing of licenses for the product.


So on to the job setup

Now that we have Nakivo configured it’s time to start creating some jobs and see just how the product performs.  From the main dashboard we can do this by simply clicking the Create button.  As you can see to the left, we have a variety of different jobs we can create, and depending on what you have set up within your inventory some may be unavailable.  For instance, I don’t have an Amazon account attached to my instance of Nakivo, so I’m unable to create a job to back up or replicate EC2 VMs.  That said, we did add our vCenter into our Inventory, so let’s go ahead and select ‘VMware vSphere backup job’ to get started…


As you can see above, the vSphere backup job creation is again in a wizard-type format, first requiring us to select just what VMs we would like to process with this job.  We do this by either browsing through the inventory presented, or filtering with the search box provided, then checking the box next to the VMs we’d like to back up.  We can also select parent objects here, such as a host, cluster, or vCenter, which would in turn back up all VMs residing within the parent.  This is useful in the event you want to capture any newly created VMs in the environment without having to modify existing jobs every time.  If selecting multiple VMs during this stage you can drag them around within the right-hand pane to set priority preferences for processing – ensuring certain VMs are backed up before others.  For now I’ve selected just my Scoreboard VM.


The second step deals with repository selection – we’ve already selected what we want to back up, now it’s time to say where to back it up to.  Selecting ‘Advanced’ and expanding out our VMs, we can see that we can globally select a repository for the job, yet perform overrides on a per-VM, per-disk basis – giving us the granularity to place certain VM disks on certain repositories if we choose to do so.


Thirdly we set up the job schedule, with shortcuts for all days, workdays, weekends, etc., which can change depending on the regional settings we have set up within the system.


Lastly we set up our job options.  It is here where we give the job a name, select our retention cycles for the job, and specify any pre/post job scripts we might want to kick off – all of the standard features you expect from a backup solution – but there are some additional options available here as well that we should have a look at…

  • App-aware mode – instructs VMware Tools to quiesce the VM before backing up, allowing applications to ensure they are in a consistent state.
  • Change Tracking – a common feature provided by VMware that allows backup applications to process just those blocks that have changed since previous backups, speeding up the time it takes to create an incremental backup.  Here we can select either the VMware version (preferred) or Nakivo’s proprietary version (available if no other CBT exists).
  • Network Acceleration – if backing up over a WAN or slow LAN links, this option will leverage compression and other reduction techniques to speed up data transfer.
  • Encryption – this option will encrypt data that flows between Transporters.  Since we have only one Transporter, this option is not available to us.
  • Screenshot Verification – this option will use a Nakivo technology called Flash VM Boot (we will cover this later) that will automatically recover our backups in an isolated manner and take a screenshot of the VM for inclusion in the job reports and notifications.
  • Recovery Points – here we can specify how many daily, weekly, monthly, and yearly recovery points we would like to maintain.
  • Data Transfer – allows us to specify how Nakivo gets to the source data (Hot Add – mounts VM disks to the Transporters; SAN – retrieves data directly from an FC or iSCSI SAN LUN; or LAN – network access to the data).  We can also specify which Transporters we would like to use for the job, if we had multiple Transporters on different networks, clusters, etc.



After clicking ‘Finish’ we can see that the ‘Run Job’ tab in the dashboard is active and displays our newly created job.  As we can see above, our new job is indeed running, with the status being updated in the Job Info section of the dashboard.  I really like the way Nakivo has displayed this data.  We can see everything we need to know about any given job – its run status, resource usage on any Transporters it’s utilizing, and the events and job status – all on one dashboard.  When the initialization of the job is complete, the UI switches to a different view showing the speed and data transferred – a very intuitive design for a UI.  The only thing I’d love to see here is the ability to break this information out into another window without having to open a new tab.

But it’s Nakivo Backup AND REPLICATION

Now that we have successfully backed up our Scoreboard VM it’s time to have a look at replication.  The process for creating a replication job is similar to that of a backup – simply click ‘Create’ and select ‘VMware vSphere Replication Job’.  Again, we are presented with a similar four-step wizard.  In Step 1 we select which VMs we wish to replicate, again with the option of selecting parent containers.


Step 2, as shown above, presents us with some different options than that of a backup.  Since we are replicating VMs, they will be stored in their native VMware format; therefore, instead of selecting a repository as a target we need to select another ESXi host.  As you can see above, I’ve simply selected to replicate my ConcessionStandPOS VM from its location in Montreal to another ESXi host located in Brossard for DR purposes.  Again, Step 3 allows us to create a schedule for the replication to occur, with the exact same options as a backup job.


Step 4, shown above, is similar to the backup job options with a few additions.  We still have the ability to select our Transporters and transport mode, set recovery point retention settings, and perform screenshot verification; however, we have a few new options to configure, outlined below…

  • Replica Names – append/prepend a string to the VM name for the replica, or specify individual names on a per-VM basis.
  • Replica Disks – allows us to maintain consistency in terms of disk type for the replica, or specify that replicas are only stored thin-provisioned.

Once we click Finish again we will see our newly created job on our dashboard.  One item of interest here is that by default Nakivo doesn’t group our jobs, meaning backup and replication jobs are intermixed.  They are distinguishable by the small icon next to them, but if you want to further distinguish between the two visually we can click ‘Create’ and then ‘Job Group’.  This essentially creates a folder that we can drag and drop our jobs in and out of, allowing us to create a Backup Job Group and a Replication Job Group.  Job Groups also allow us to perform bulk operations on all jobs within that group, such as starting, stopping, disabling and enabling, etc…

When it really matters…

We can do all of the backing up and replicating we want, but when push comes to shove we all know that it’s the recovery that matters most!  All recovery within Nakivo is done from the ‘Recover’ menu in the main dashboard.  As you can see to the left, we have a variety of options when it comes to recovery in Nakivo, with each explained below…

Individual Files

This allows us to recover individual files from a VM backup within Nakivo.  After selecting our backup and then a desired restore point, or point in time to restore to, Nakivo will mount the deduplicated, compressed backup file to its Director interface.  In the end we are presented with a file browse dialog box, allowing us to select individual files, folders, partitions, and drives.  From there we have the option of either downloading these files directly to our Nakivo server – or whatever client you happen to be running the Nakivo UI on – or forwarding them via email.

Microsoft Active Directory Objects

Active Directory objects are treated somewhat the same as a file-level recovery.  The backups are mounted in their compressed and deduplicated state to the Nakivo server.  From there you can browse or search for individual objects and recover them directly to your client machine.  The AD objects are downloaded in LDIF format, which allows for easy importing directly back into Active Directory.

Microsoft Exchange Objects

Similar to Active Directory objects, Nakivo can restore Microsoft Exchange items as well.  With this, we have the ability to search for and recover items such as emails, folders, mailboxes, etc.  The items are downloaded to the client machine, or alternatively forwarded via email to an address of your choosing.

VMs from backup

If you need to restore an entire VM this is the option you would most likely choose.  Nakivo allows you to restore a complete VM from a backup file – at which point it extracts the data from the deduplicated, compressed backup file and re-registers the VM on a host of your choosing, either preserving the VM’s UUID or creating a new one.  Just as in replication, we are able to restore the VM to its original disk type, or force it to be thin-provisioned.  We can also specify whether we would like our recovered VMs powered on, and whether or not we would like to change or preserve the MAC address on the recovered VM.

VMs from replica

Failing over to a replica within Nakivo is a very easy process.  Essentially you simply select which VM you would like to fail over, select the point in time you want to fail over to, and run the job – after that, Nakivo simply places the replica at the correct point-in-time snapshot and powers it on.  When completed you are left with an exact copy of your VM, recovered almost immediately.

Flash VM Boot

Flash VM Boot is a technology that allows us to power on our VM backups directly from their compressed and deduplicated state.  Rather than taking the time to restore the data as we did in the ‘VMs from backup’ scenario, we can simply boot a VM directly from its backup files.  Nakivo does this by first creating a new VM on a target ESXi host, then exposing the VM’s disks within the backup as iSCSI targets and mounting them directly to the newly created VM as virtual RDMs.  Before any mounting, though, a snapshot is created, which redirects any changes that may take place during the Flash VM Boot, providing a means of discarding them later in order to preserve the integrity of the backups.  This is the technology that enables the ‘Screenshot verification’ option within the backup jobs, allowing us to ensure that our backups will indeed boot up when it really matters.  Once the VMs have booted you can permanently recover them by utilizing VMware Storage vMotion to migrate the RDMs to VMDKs, or, if you aren’t licensed for Storage vMotion, you can create a new replication job within Nakivo to replicate the VM to another host.

So what’s the verdict?

Nakivo is certainly a very easy product to use and get used to – having the management interface run through a web browser is certainly an advantage, as you can launch the management interface from any workstation without installing a client!  Also, the UI is very intuitive and very clean, which is surprising because they cram a lot of information into those screens – but everything is super easy to find.  Creating backup and replication jobs is a breeze – simply launch the four-step wizards and run from start to finish!  As for performance I can’t complain either – all of my jobs finished in a timely manner; mind you, my test VMs are quite small with very little change rate, but needless to say performance was fine.  Nakivo is architected in a way that is simple to get up and running very quickly, yet also simple to scale with a growing environment by adding more Transporters and repositories.  I really like the options you have when deploying Nakivo – be it physical, virtual, or cloud; Windows, Linux, virtual appliance, or even on a NAS such as Synology – Nakivo leaves the choice to you!  The deduplication technology is outstanding – and coupled with the compression they offer, you can be sure that you are using as little capacity as needed and not storing redundant data or wasting space.  I would, however, like to see the product expanded in the future to include a couple of features that I couldn’t find.  Firstly, it would be nice to see Nakivo bake in the ability to restore individual files and application items directly back into their source VMs without having to download them locally.  As well, even though I don’t use it, Hyper-V support seems to always come last on backup vendors’ lists – hopefully we see this supported sometime soon too.
I should mention that even though this review focused solely on VMware, Nakivo is fully supported to protect your instances in Amazon as well – giving you feature-rich backup and replication options to move data between regions without utilizing snapshots. Also, there are a slew of multi-tenancy options that I didn’t have time to explore, as well as the ability to perform copies of your backups offsite or to the cloud.  As far as licensing goes, Nakivo is licensed on a per-socket basis, and honestly, starting at $199/socket for VMware and $49/month for AWS you are going to be hard-pressed to find a product with all of these features at a lower price point!

With all this said, would I recommend Nakivo?  Certainly!  It’s easy, intuitive, it performs, and it’s priced right!  But as always, don’t necessarily take my word for it!  If you want to try out Nakivo for yourself you can – if you are a VMUG member, vExpert, VCP, VSP, VTSP, or VCI you can get your hands on a free, full-featured two-socket NFR key!  Nakivo also offers a full-featured trial edition for 14 days to try the product out!  Still not enough for you?  Nakivo has a free edition – you can back up 2 VMs, with all of the features above, for free, forever!  Again – options!  And no excuse not to try it out!

Want to learn more about Nakivo?

Check out some of these great resources!

As well as some other great community reviews of Nakivo


Free NFR from Altaro


Over the last number of years we have seen a lot of virtualization companies showing their appreciation for industry influencers by offering up free NFR licenses for their products. Now we can add one more to that list – Altaro Software. Altaro has been a supporter of my blog for a little while now, and as with all my sponsors I try to help spread the word about any initiatives or giveaways they have that I find particularly interesting and think my readers may benefit from.

So, without further ado, if you are a VMware vExpert or a Microsoft MVP, Altaro has a gift for you! In appreciation for the work you do within the virtualization industry, Altaro is offering up an NFR key for their flagship software, Altaro VM Backup, at no cost. Just follow this link to fill in the form and grab yours. Keep in mind, this is a full-featured NFR license of their Unlimited edition – meaning you can back up as many VMs as often as you like. I went through the installation and configuration process of Altaro in my introductory post a ways back and found it very intuitive with a nice UI – certainly a product worth checking out! For some more in-depth writing around Altaro, may I suggest Vladan Seget’s series on the product.

With that, thank you Altaro for your support of the community – and happy backuping everyone!

Using PowerShell to mass configure the new Veeam v9 features

Veeam v9 is here, and if you have already performed the upgrade you might be a bit anxious to start using some of the new features that came along with it.  In my case I’ve already done my due diligence by enabling and configuring some of the new features on a few test backup/replication jobs, and I’m ready to duplicate this to the rest of the environment – the problem being I have A LOT of jobs to apply these to.  As always, I look to automation to solve this issue for me.  One, it is way faster, and two, it provides a consistent set of configuration (or errors) across my jobs – making it far easier to troubleshoot and change if need be.  Thankfully Veeam provides a set of PowerShell cmdlets that allow me to automate the configuration of some of these features.  So, if you are ready to go, let’s have a look at a few of the new features within Veeam v9 and their corresponding PowerShell cmdlets.

Just a note – for each of these examples I’ve posted the code to handle one object, but you could easily surround the blocks of code with a foreach() if you are looking to apply the configurations to many objects.  That is in fact what I have done in my environment; however, it’s easier, and much easier to read, if I just show the code dealing with individual objects.

Enabling Per-VM file chains

First up is the per-VM backup file chain introduced in v9.  In previous versions of Veeam, all of the VMs contained within a single job were also contained within a single backup file – in the end we were left with some massive backup files sitting on our repositories.  Having a massive file lying around isn’t such a big deal, but when the time came to manage or move that file in any way it presented a few problems – it took a long time to move, and activity surrounding that file needed to be disabled until we were done.  In the end we were left with a lot of waiting and no backups.  The v9 per-VM backup file chain fixes this – it allows us to store our backup files on a per-VM basis, leaving them much easier to manage, not to mention the headaches that are saved if corruption of a backup file occurs.  Either way, I wanted to enable this on a dozen or so of my repositories…

I say repository since that is where the per-VM backup chain is enabled – not on the job, not on the VM, but on the actual Veeam repository.  The process of doing so is pretty simple: get our repository, set a flag to true, and call the SaveOptions() function – as follows…

$repo = Get-VBRBackupRepository -Name "Name of repository"
$repo.Options.OneBackupFilePerVm = $true
$repo.saveOptions()

New Mount Server

In versions of Veeam before v9, certain restore operations required mounting backups to the Veeam backup server, which, when dealing with remote sites, could have resulted in increased bandwidth usage depending on how you had configured your environment.  v9 gives us the ability to designate any Windows machine as a mount server.  The mount server can then be used as a mount point to perform file-level recovery operations, allowing the bandwidth to stay local to the remote site.

As with the Per-VM backup chains, mount servers are enabled at the repository level.  In my case I wanted my repositories and mount servers to be one and the same – in order to do that I simply get the remote repository, then call Set-VBRBackupRepository, passing it my mount host name and turning on the vPower NFS flag as shown below…

$repo = Get-VBRBackupRepository -Name "Name of repository"
$repo | Set-VBRBackupRepository -MountHost (Get-VBRServer "Name of desired Mount Host") -EnableVPowerNFS

Guest Interaction Proxy

Another new ROBO-enhancing feature in v9 is the ability to specify a guest interaction proxy.  Previously the Veeam Backup and Replication server handled deploying runtime processes into the VMs to facilitate different parts of the backup and replication jobs – in v9, we can now designate servers that may be onsite to do this.  This helps in a couple of ways – first, it reduces traffic traversing our WAN, and secondly, sometimes backup servers were isolated from the VMs they were backing up, preventing certain actions from even being able to take place.  Anyway, the Guest Interaction Proxy is a per-job setting and is set up within the VSS settings of the job.  In my case I just needed to flip AutoDetect to $true in order to get Veeam to select the proper GIP.

$job = Get-VBRJob -Name "Job Name"
$vssoptions = $job.GetVssOptions()
$vssoptions.GuestProxyAutoDetect = $True
$job.SetVssOptions($vssoptions)

Enable deleted file blocks

Veeam v9 has introduced many data reduction technologies to help us save space and more efficiently manage all of our backup capacity.  The first technique we will look at is the ability to skip backing up deleted file blocks.  This can be enabled on your existing backup jobs by setting the DirtyBlocksNullingEnabled flag as follows.

$job = Get-VBRJob -Name "Job Name"
$joboptions = $job.GetOptions()
$joboptions.ViSourceOptions.DirtyBlocksNullingEnabled = $True
Set-VBRJobOptions -Job $job -Options $joboptions

Excluding certain folders/files

Another space-saving feature inside of v9 is the ability to exclude or include certain files or folders contained within the VMs – think about Temp directories; under normal circumstances we don't need them, so why take up all that capacity backing them up?  We set this up by first setting the BackupScope property – this can be set to exclude folders (ExcludeSpecifiedFolders), only include folders (IncludeSpecifiedFolders), or simply back up everything (Everything).  Depending on the setting of BackupScope we then set GuestFSExcludeOptions or GuestFSIncludeOptions with an array of strings pointing to the desired folders – finally, saving our job options as follows…

$job = Get-VBRJob -Name "Job Name"
$jobobject = Get-VBRJobObject -Job $job -Name "VM Name"
$vssoptions = Get-VBRJobObjectVssOptions -ObjectInJob $jobobject
$vssoptions.GuestFSExcludeOptions.BackupScope = "ExcludeSpecifiedFolders"
$vssoptions.GuestFSExcludeOptions.ExcludeList = "C:\folder","D:\folder","c:\test\folder"
Set-VBRJobObjectVssOptions -Object $jobobject -Options $vssoptions

Storage-Level Corruption Guard on Production Backup Jobs (not just backup copy)

SureBackup does a great job of ensuring our VMs will boot, however there may be certain portions of our data that become corrupt yet still pass a SureBackup test.  To help alleviate this, Veeam has introduced something called Storage-Level Corruption Guard (SLCG) to periodically identify and fix certain storage issues.  SLCG has actually been around in previous versions, but only for Backup Copy jobs.  In v9 it can now be enabled on our production backup jobs, giving us a little more peace of mind when the need to restore comes along.  This is enabled by first setting the EnableRechek (yes, it's spelled like that) flag, then setting a schedule (Daily/Monthly) and a few other options, and finally saving our job options…  Below we've set a job up to perform SLCG on Fridays.

$job = Get-VBRJob -Name "Job Name"
$joboptions = $job.GetOptions()
$joboptions.GenerationPolicy.EnableRechek = $True
$joboptions.GenerationPolicy.RecheckScheduleKind = "Daily"
$joboptions.GenerationPolicy.RecheckDays = "Friday"
Set-VBRJobOptions -Job $job -Options $joboptions

Defragment and compact full backup file – on production backups not just backup copy

Over time our full backup files can become bloated and heavily fragmented – when we delete a VM, for example, the full backup might still be holding onto data that belonged to that VM.  Normally we could take an active full backup to help purge this data, but as we all know that requires us to affect production and use up valuable resources.  To help alleviate this, v9 has introduced the ability to defragment and compact our full backups on a schedule.  This is done very similarly to SLCG: getting the options of the job and setting the schedule.  Below we enable our defrag to run on Fridays.

$job = Get-VBRJob -Name "Job Name"
$joboptions = $job.GetOptions()
$joboptions.GenerationPolicy.EnableCompactFull = $True
$joboptions.GenerationPolicy.CompactFullBackupScheduleKind = "Daily"
$joboptions.GenerationPolicy.CompactFullBackupDays = "Friday"
Set-VBRJobOptions -Job $job -Options $joboptions

So there you have it – a little bit of automation for those who may have to update numerous jobs to fully take advantage of some of the features Veeam v9 has introduced.  As always, please feel free to reach out if any of this isn't working, or if you have any comments, questions, concerns, rants, etc.  Thanks for reading!

Nice to meet you Altaro VM Backup


I’m happy to announce that Altaro is now a sponsor of this site, and as I normally do with new sponsors I like to give a little introductory post containing background on the company and the products they provide.  Altaro was founded in 2009 and has been making a stand in the VM backup space ever since.  Altaro’s early focus was within the Microsoft space, providing backup and restore operations on the Hyper-V platform with their product Altaro Hyper-V Backup.  Everything changed in September of 2015 when the product was renamed to Altaro VM Backup and support for VMware vSphere was added to the solution.

I had a chance to check out early builds of the VMware vSphere support within Altaro VM Backup as a member of their beta.  Timing and business prevented me from ever blogging about the product, but going back through my rough notes and searching deep within my (limited) memory I can say there are a few things that really stood out for me…


Installation of Altaro VM Backup was a breeze!  I utilized the free 1,000 CPU hours that we get as vExperts from Ravello Systems to set this up – basically all that is needed is a few clicks and a Windows machine to install the software.  It’s your basic Next, Next, Done type of wizard-driven install.  It should be noted that once everything is fully set up and configured, the Altaro Management Console can be installed on a remote machine as well, connecting to your main server over the network – meaning there is no need to RDP into the Altaro VM Backup console all the time; a simple connection from your laptop will suffice.


As far as configuration goes, Altaro VM Backup can’t get much easier!  Altaro’s configuration boils down to the two basic questions we ask ourselves when considering backup: What are we going to back up?  Where are we going to put it?  The first question is answered by simply pointing Altaro to your vCenter server (or individual ESXi/Hyper-V hosts) and providing credentials – from there Altaro will connect to the vSphere APIs and bring back an inventory of your environment.  The second question, where to put it, is just a matter of selecting your backup storage.  This can be either a network location (via UNC path) or a physical drive attached to the Altaro Management Console.  Additionally, Altaro VM Backup provides customers with a means to ship copies of your backups offsite as well.  This can be done either by rotating external USB drives, via network paths (UNC), or to another instance of an Altaro server running at the secondary location.

Backup and whatnot…

Once you have some source VMs and target storage set up, Altaro acts as you would expect, allowing you to set up scheduled backup jobs to run every hour/night/week, etc. – or take one-off backups as well.  One nice feature is the ability to simply drag and drop a VM onto your storage and have it create the job for you automagically!  There are a few other bullet points below that really helped sell Altaro to me…

  • VSS support- meaning we can fully quiesce virtual machines to ensure consistent backups
  • Item Level restore support – meaning we can restore individual emails from Exchange, individual files from VMs, etc.
  • Full support for Microsoft Cluster Shared Volumes
  • Compression and Encryption
  • Ability to back up VMs to multiple locations
  • Individualized retention policies applied on a Per-VM basis.
  • Sandbox Restores – allowing you to test the integrity and restorability of your backup files.

With all of these features packed into their first release supporting vSphere, I can only hope to see more from Altaro!  Let me reiterate though, the main selling point of the software for me was not a certain feature or support for any platform – it’s the UI!  A clean, crisp, easy-to-use user interface should be of high importance for any product that hits the market – a poorly designed one can make or break a customer’s reaction to your product!  Altaro has done a great job with theirs – the drag and drop functionality is awesome, and everything is easy to find – a very intuitive design!  See for yourself below!  Not to mention that I went from install to backup in less than 10 minutes, without the need to use any documentation!


So with all that, welcome Altaro to the mwpreston.net family!  You should expect to see me go into this software a bit deeper in the future – in the meantime, if you want to try out Altaro for yourself you can do so for free – you can either go the 30-day trial route or simply use the product for free for 2 VMs FOREVER!  Needless to say, if you are in the market for some backup, don’t forget about Altaro!

Manually updating the Veeam Proxy Transport and Mount services

With the release of v9 hitting the Internets on Tuesday I’ve been a very busy man upgrading various Veeam consoles, proxies, and repositories.  With nearly 70 different locations to look after, you can imagine the number of proxies and repositories I have, both on and off site, all requiring their respective Veeam services to be upgraded.  Mix that together with a few slower WAN connections and I can almost bet that the automated component update that Veeam ships with will naturally fail on a couple of servers.

Want the tl;dr version?
Packages are in c:\Program Files\Veeam\Backup and Replication\Packages – copy them to your failed server and install 🙂


Failed to upgrade host components

When this happens to me usually I get some sort of error message like the following


For the most part, re-running the automated component update will fix the issue, but there are times when it fails, again, and again, and again.  Usually by the third time I resort to manual intervention.

Manually installing the transport/mount service

First up, you need to get a hold of the installation files.  These are located on your Veeam Backup and Replication server under C:\Program Files\Veeam\Backup and Replication\Packages – there you will find the individual packages for each service that Veeam provides (mount, transport, tape, etc.).  Depending on what services your proxy is providing you may need a number of these.  Since my server was acting as a repository as well as a proxy, I simply needed the transport and mount server packages (VeeamTransport.msi and VeeamMountService.msi respectively).  Also, don’t forget that Veeam relies heavily on the .NET Framework, so you must keep that updated as well – you can find the redistributable installation package for that within the Packages folder alongside the others (NDP452-KB2901907-x86-x64-AllOS-ENU.exe).

Installation is just like any other install – your typical Next->Next->Done type of scenario.  Once you have run the required packages, head back to Veeam Backup and Replication.  If you are still on the component update screen, a ‘Refresh’ should update the status of the packages – if not, a rescan of your server within the Backup Infrastructure section is required.
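If you'd rather skip clicking through the wizards (say, when fixing a handful of remote servers over a PowerShell session), the MSIs can also be run silently with msiexec – a rough sketch, assuming you've already copied the packages to C:\Temp on the failed server (the paths and log file names here are just examples):

```powershell
# Sketch only - run locally on the failed proxy/repository server after
# copying the packages over from the backup server's Packages folder.
$pkg = "C:\Temp"

# Silently install the transport and mount services, writing verbose logs
Start-Process msiexec.exe -Wait -ArgumentList "/i `"$pkg\VeeamTransport.msi`" /qn /norestart /l*v `"$pkg\VeeamTransport.log`""
Start-Process msiexec.exe -Wait -ArgumentList "/i `"$pkg\VeeamMountService.msi`" /qn /norestart /l*v `"$pkg\VeeamMount.log`""
```

If either install fails, the /l*v log files are the first place to look – in my case they would have pointed straight at the disk space issue.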

So that’s that – as it turns out, the reason my installation was failing during the automated process was a lack of disk space, but nonetheless this is good information to have.  If you are looking for more information regarding the new features within Veeam Backup and Replication v9, I’ve done a post here – feel free to check it out!