Lately I’ve found myself engulfed in configuring and tuning some Veeam replication jobs. As I’ve been monitoring them I’ve noticed a few VMs that seemed to be taking far too long to replicate. After a little investigation I realized that the majority of the job’s run time was being spent in the Processing Configuration step.
Now, after a look into the logs, it appeared that the VM replica in question was actually living on a different datastore than the one set up as the destination for the job. Also, when I went to browse that destination datastore, there were a ton of folders labelled VMName_Replica_1, Replica_2, Replica_3 and so on, around 50 of them :). So, the fix is either to change your job destination to the datastore where the replica actually lives, or to Storage vMotion the replica over to the original job destination. I chose the latter, as many other VMs in the job already resided on the correct datastore. After that, the next round of replication sat on Processing Configuration for 30 seconds instead of 40 minutes.
Moral of the story: if your job sits on Processing Configuration for a long time, have a look at your job definitions, particularly your destination datastore and where your replicas are actually physically located. You probably have a mismatch there. No Storage vMotion license? No problem: check out Veeam’s Quick Migrate, which will do the same thing for you.
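If you want to confirm where a replica actually lives before changing anything, a quick look from the ESXi shell does the trick. This is just a sketch assuming SSH access to a host that can see the datastores in question; the replica naming pattern below is only an example, so swap in whatever suffix your jobs actually use.

# Lists registered VMs along with the [datastore] path of their .vmx files,
# which shows you which datastore each replica is really sitting on.
vim-cmd vmsvc/getallvms | grep -i replica

# Or look for the replica folders directly across all mounted datastores.
ls -d /vmfs/volumes/*/*replica*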
Does Veeam clean up the old folders in the datastore or do we have to do that manually?
As far as I can recall it did not; I’m pretty sure I had to go in and manually clear some things out.
Yeah, we had to remove the empty folders manually. It’s easily done via ssh though, since rmdir only removes empty directories. I ended up going to /vmfs/volumes and running ‘rmdir */*replica*’ to remove all the empty folders with replica in the name across the datastores.
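For anyone else tackling the same cleanup, this is roughly what it looks like from an ESXi shell; it’s just a sketch based on the naming in this environment, so adjust the *replica* pattern to match your own replica suffix.

# From an SSH session on the ESXi host, list the candidates first
# so you can eyeball what is about to be removed.
cd /vmfs/volumes
ls -d */*replica*

# rmdir only succeeds on empty directories, so any folder still
# holding a live replica is left untouched and simply errors out.
rmdir */*replica*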
Thanks for the post back – those empty folders sticking around could certainly be troublesome!