In Part 1 of our Veeam Backup Maintenance series, we took a look at how we can leverage Storage-level corruption guard to auto-heal and repair storage blocks that have become corrupt, either by re-hydrating data from the source or by performing new active full backups to obtain fresh data. In this part, we will take a look at a feature that can help us keep our full backup files in tip-top shape while ensuring we get the most out of the storage capacity available to us: Backup File Defragment and Compact.
It sounds good, but why?
This is the first question that always pops into my mind before I go about changing an advanced setting in any product – why would I want to do this? Defragmenting and compacting certainly sound like things I would want to do, but what are the variables that cause our full backup files to become fragmented and in need of compaction in the first place? Let’s try to explain this using the illustrations below.
As we can see above, backup jobs that use methods requiring transformation (i.e., forever forward incremental or reverse incremental) perform a lot of work on the full backup file. As the retention policy for a given job is met, the oldest restore points are merged into the full backup file and then deleted. After a number of job runs we may end up with a full backup file similar to the one shown on the right: the file keeps growing larger and larger, while blocks belonging to the same VM are written in a non-contiguous way. As you may have guessed, this will definitely have performance implications when the time comes to restore these VMs. To help combat this, we can enable the “Defragment and compact full backup file” setting within Veeam.
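To make the fragmentation process more concrete, here is a toy model in Python. This is purely illustrative and is not Veeam's actual on-disk .vbk format or merge logic – it simply shows how repeated in-place merges can leave a file full of holes, with a VM's live blocks scattered out of order:

```python
# Toy model of full backup file fragmentation (NOT Veeam's real format).
# The "file" is a list of slots; each slot is (vm, block_id, data) or
# None for a stale hole left behind by a merge.

def merge_increment(full_backup, changed_blocks):
    """Merge an incremental restore point into the full backup file.

    An in-place merge cannot rewrite the whole file, so the old copy of
    each updated block is marked stale (a hole) and the new version is
    appended at the end of the file.
    """
    for vm, block_id, data in changed_blocks:
        for i, slot in enumerate(full_backup):
            if slot is not None and slot[:2] == (vm, block_id):
                full_backup[i] = None  # invalidate the old block
        full_backup.append((vm, block_id, data))

# Initial full backup: each VM's blocks laid out contiguously.
full = [("vm1", 0, "a"), ("vm1", 1, "b"), ("vm2", 0, "c"), ("vm2", 1, "d")]

# Two merge cycles, touching blocks from both VMs.
merge_increment(full, [("vm1", 1, "b2"), ("vm2", 0, "c2")])
merge_increment(full, [("vm1", 0, "a2")])

print(full)
# Live blocks are now interleaved across VMs and separated by holes,
# so restoring a single VM requires non-sequential reads.
```

Each merge grows the file and scatters the live blocks further, which is exactly the pattern the illustration depicts.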
Defragmentation and Compaction
When we choose to perform a full backup file defragmentation and compaction, a number of things happen. First, Veeam creates a second full backup file (.vbk) to work with. It then copies only the valid data blocks from our existing full backup into the newly created empty file. As we can see below, this lays out each VM’s blocks contiguously within the full backup file, improving read performance when we need it during a restore.
Another big benefit of the defragmentation/compaction operation in Veeam is the ability to remove those VMs which have been deleted or removed from our backup job. As shown below, during the compaction process any blocks which contain data from VMs that have been removed from the job (VM3 in our case) are not processed or copied into our newly created .vbk file – in the end, we are left with a full backup file that is much smaller, allowing us to better leverage the capacity available to us for backup storage.
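The compaction pass itself can be sketched with the same kind of toy model. Again, this is a conceptual illustration and not Veeam's implementation: valid blocks are copied into a fresh file, grouped contiguously per VM, and blocks belonging to VMs no longer in the job are simply skipped:

```python
# Toy sketch of defragment-and-compact (NOT Veeam's real implementation).
# Slots are (vm, block_id, data) tuples; None marks a stale hole.

def compact(full_backup, vms_in_job):
    """Copy only valid blocks of VMs still in the job into a new file,
    laying each VM's blocks out contiguously and in order."""
    new_file = []
    for vm in vms_in_job:
        blocks = sorted(
            (slot for slot in full_backup
             if slot is not None and slot[0] == vm),
            key=lambda slot: slot[1],  # sequential block order per VM
        )
        new_file.extend(blocks)
    return new_file  # the old, fragmented file is deleted afterwards

# A fragmented file containing holes and a block from vm3,
# which has since been removed from the backup job.
fragmented = [None, ("vm3", 0, "x"), ("vm2", 1, "d"),
              ("vm1", 1, "b2"), ("vm2", 0, "c2"), ("vm1", 0, "a2")]

compacted = compact(fragmented, vms_in_job=["vm1", "vm2"])
print(compacted)
# [('vm1', 0, 'a2'), ('vm1', 1, 'b2'), ('vm2', 0, 'c2'), ('vm2', 1, 'd')]
```

The resulting file is both smaller (the holes and vm3's orphaned block are gone) and faster to restore from, since each VM's blocks are now contiguous.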
Although defragmentation and compaction sound like a no-brainer to enable, there are also a number of gotchas to watch out for. The first, and most obvious in my opinion, is that your backup storage must have enough capacity to hold two full backups at the same time. For instance, if you are running forever forward incremental you will have one full backup with a number of incrementals – in order for Veeam to compact that full backup file, it needs the capacity to build another full backup file before the older one can be purged. Also, this feature can only be enabled on jobs that do NOT have active full or synthetic full backups enabled. Performing those types of full backups already produces a defragmented/compacted full backup file, so defragment and compact is not needed in those situations. Even if an active full is run manually outside of the backup job, defragment and compact will detect this and skip its run on that day. For more information, certainly have a read of the Compact Full Backup section in the Veeam User Guide.
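The capacity requirement above boils down to a simple pre-check you can do yourself before enabling the feature. A minimal sketch, assuming a 10% safety margin (my own figure, not one documented by Veeam):

```python
# Hedged sketch of the capacity math implied above: during compaction
# the repository must hold the old and the new full backup at once, so
# free space must cover at least one more copy of the current .vbk.

def can_compact(full_backup_bytes, repo_free_bytes, headroom=1.1):
    """Return True if the repository can hold a second full backup.

    `headroom` adds a safety margin (10% here) -- an assumed value,
    not a Veeam-documented requirement.
    """
    return repo_free_bytes >= full_backup_bytes * headroom

GB = 2 ** 30
print(can_compact(500 * GB, 600 * GB))  # True: 600 GB free vs ~550 GB needed
print(can_compact(500 * GB, 400 * GB))  # False: not enough room
```

If the check fails, you would need to free up repository space (or grow it) before the compact operation can complete.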
So, in the end, there are a number of ways Veeam can perform maintenance on our backup storage. Protecting against storage corruption with Storage-level corruption guard, and increasing read speeds while making better use of capacity with full backup file defragmentation and compaction, are two awesome advanced settings within a backup job! As always, no feature is one-size-fits-all, and you should always investigate and test enabling or disabling this functionality in your own environment – but hopefully this series has helped you understand a little bit about how each feature works – I know it has for me! Thanks for reading!