For a while now, Veeam has been able to execute post-job scripts, or scripts that run after a job completes, but it wasn't until version 8 of their flagship Backup and Replication product that they added the ability to run a pre-job script, one that executes before the job starts. When v8 first came out I struggled to figure out what in the world I would need a pre-job script for, and for the longest time I never used one in any of my environments. If a job failed, I would execute post-job scripts to hopefully correct the reason for the failure. But a while back it dawned on me, and with a bit of a change in mindset I realized something: why fail first?
Why fail when success is possible?
As I mentioned above, I'd grown accustomed to using post-job scripts to correct failing jobs. For instance, there were times when, for whatever reason, a proxy would hold on to a disk belonging to one of my replicas. The next run of that job would fail trying to access the disk, and, more importantly, consolidation of any VMs requiring it would fail because the original replica couldn't access the disk still mounted to the proxy. What did I do to fix this? I added a script that executed post-job and simply unmounted any disks from my Veeam proxies that shouldn't be mounted there.
Another scenario: I had some issues a while back with NFS datastores simply becoming inaccessible. The fix was simply to remove and re-add them to the ESXi host. My solution at the time was, again, a post-job script in Veeam. If the job failed with an error about not being able to find the datastore, I ran a script that would automatically remove and re-add the datastore for me, and on the next job run everything would be great!
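For reference, here's a minimal PowerCLI sketch of that remove-and-re-add approach. The host name, datastore name, NFS server, and export path below are placeholders rather than the actual values from my environment, and it assumes you already have a Connect-VIServer session open:

# placeholder values - swap in your own host, datastore, and NFS export
$vmhost  = Get-VMHost "esxi01.mwpreston.local"
$dsName  = "NFS-Datastore01"
$nfsHost = "nas.mwpreston.local"
$nfsPath = "/volume1/nfs-datastore01"

# remove the inaccessible datastore from the host...
Get-Datastore -Name $dsName | Remove-Datastore -VMHost $vmhost -Confirm:$false

# ...and mount it again
New-Datastore -Nfs -VMHost $vmhost -Name $dsName -NfsHost $nfsHost -Path $nfsPath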
“Fail and Fix” or “Fix and Pass”
The two solutions above do fix the issues, but they do it after the fact, after we have already failed. Even though everything gets fixed up for the next run of the job, I'd still lose that one restore point, and sure enough, the time WILL come when it's that exact point in time you need to recover from! The answer to all of this is pretty simple: migrate your post-job scripts to pre-job scripts and set yourself up for success before the job even starts. Although this may seem like common sense, for whatever reason it took a while before I saw it that way.
So with all that, hey, let's add some code to this post. Below you will find one of my scripts that runs before each Veeam job: my proactive approach to removing leftover replica disks from my Veeam proxies!
Add-PSSnapin VeeamPSSnapIn
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer vcenter.mwpreston.local -u username -pass password

# get the job this script was called from out of the parent process id
$parentpid = (Get-WmiObject Win32_Process -Filter "processid='$pid'").ParentProcessId.ToString()
$parentcmd = (Get-WmiObject Win32_Process -Filter "processid='$parentpid'").CommandLine
$jobid = $parentcmd.split('" "')[16]
$vbrjob = Get-VBRJob | Where-Object { $_.Id -eq "$jobid" }

# get some info to build replica VM names
$suffix = $vbrjob.Options.ViReplicaTargetOptions.ReplicaNameSuffix
$vms = $vbrjob.GetObjectsInJob()

# create array of replica names
$replicasinjob = @()
foreach ($vm in $vms)
{
    $replica = $vm.Name + $suffix
    $replicasinjob += $replica
}

# loop through each replica and check the Veeam proxies (VBR*) for foreign disks
foreach ($replicaitem in $replicasinjob)
{
    $replica = $replicaitem.ToString()
    Get-VM -Location ESXCluster -Name VBR* | Get-HardDisk |
        Where-Object { $_.FileName -like "*$replica*" } | Remove-HardDisk -Confirm:$false
}

exit
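One quick note on the job lookup near the top of the script: pulling the job ID out with split('" "')[16] depends on exactly how Veeam builds the parent process's command line, and that token position could shift between versions. A slightly more defensive variation, just a sketch that assumes the job ID is the first GUID on that command line, is to match it directly:

# alternative job lookup - assumes the job ID is the first GUID on the parent command line
$parentpid = (Get-WmiObject Win32_Process -Filter "processid='$pid'").ParentProcessId.ToString()
$parentcmd = (Get-WmiObject Win32_Process -Filter "processid='$parentpid'").CommandLine
if ($parentcmd -match '[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}')
{
    $jobid = $Matches[0]
    $vbrjob = Get-VBRJob | Where-Object { $_.Id -eq $jobid }
}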
So as you can see, the main script is quite simple. It first retrieves the job it was called from by walking up to its parent process and pulling the job ID from the command line, which means the same block of code can be reused in any of our jobs. It then searches through all of the disks attached to the Veeam proxies, and if it finds one that belongs to a replica we are about to process, it removes it. Simple as that! Now, rather than failing our job because a certain file has been locked, we have set ourselves up for a successful job run without having to do a thing, which is the way I normally like it 🙂 Thanks for reading!