Veeam Backup & Replication is a very easy application to get up and running – but underneath that simplicity there are a lot of moving parts and components that make it look easy. Let’s have a look at each one and explain what it does, as I’m sure you will see questions revolving around the functionality of these components on the exam.
The Backup Server
The backup server is where Veeam is actually installed. You can think of the Backup Server as the management plane, if you will – coordinating all of the backup jobs, kicking off schedules, and instructing the other components what to do. The backup server has a few responsibilities:
- Coordinates all tasks such as backup, replication, recovery verification and restore
- Controls the scheduling of jobs as well as the allocation of the resources (other components) to those jobs.
- Central management point for your Veeam environment and maintains global settings for the infrastructure
- A default backup proxy and backup repository are automatically configured on the server designated as the Backup server. This allows small environments to get up and running very fast.
The Backup and Replication Console
The B&R console is the client piece of the client/server application that we use to actually manage our infrastructure. In order to log into a B&R server with the console, the user needs to be a member of the local Administrators group on the B&R server. From there, users can be further limited in what they can do using Veeam’s role functions.
Some interesting and testable tidbits around the console are:
- Multiple users can be logged into a B&R console making changes to the same jobs; whoever saves their changes first gets priority, meaning the other users will be prompted to reload their wizards to pick up the most recent changes after that user saves his/her changes.
- If a session is lost due to network issues, the session is maintained for a maximum of 5 minutes. If the connection is re-established within this time, users are good to go.
- You cannot perform a restore from a configuration backup when logged in remotely – this must be done directly on the backup server itself.
- When a console is installed, a number of items are also installed by default during the setup process:
- PowerShell Snap-In
- Explorers for Active Directory, Exchange, Oracle, SQL, and SharePoint
- A Mount Server (explained later).
The Backup Proxy
The Backup Proxy is the heavy lifter within the Veeam environment. It handles the movement of data between source and target – whether that be during a backup, a replication, a VM migration job, or a restore operation, all the data moves through a Veeam Backup Proxy. As I mentioned earlier, a default proxy gets installed on our Backup Server during the initial install – and this may be fine and dandy for a small environment, but as you find the need to increase performance, concurrency, and scale, you will need to add more backup proxies to your environment. Interesting tidbits around the backup proxy…
- Deploys on a Windows machine, which can be physical or virtual – and that choice directly affects which backup transport modes are available (explained later). Essentially, you can’t do hot-add if your machine is physical, however you may want to leverage physical for something like Direct SAN.
- Deployment is fully automated and handled by the Backup Server – you just point it towards a server in your infrastructure.
Depending on whether you are deploying Veeam within VMware or Hyper-V, a proxy will use a variety of methods to retrieve data, referred to by Veeam as Transport Modes in VMware and Backup Modes in Hyper-V. These are defined directly in the proxy properties.
VMware Transport Modes
- Direct SAN Access
- This is the quickest processing mode and has the least impact on your production environment, as it fully offloads the backup processing from the hosts.
- Supports block storage only (iSCSI/FC). When using iSCSI both physical and virtual backup proxies can be deployed.
- Direct SAN can be used for all operations involving the proxy, both backup and restore.
- Requirements of Direct SAN Access are…
- The backup proxy needs to have direct access to the production storage through either a hardware or software HBA.
- LUNs must be exposed/zoned/presented to the backup proxy performing the Direct SAN Access. Volumes should be visible in disk management, but not initialized. Veeam automatically sets a SAN Policy within each proxy to Offline shared to help prevent initialization from occurring.
- For restore operations the proxy will need to have write access to the LUNs hosting the disks.
- The process of Direct SAN Access is as follows
- Backup proxy sends a request to the host to locate the necessary VM on the datastore
- ESXi host locates VM and retrieves metadata about the layout of the VMs disks on the storage.
- The host then sends the metadata back to the backup proxy over the network
- The backup proxy uses the metadata to copy the VMs data blocks directly from the SAN.
- Proxy processes the data and finally sends it to the target.
- Direct NFS Access (new in v9)
- Recommended for VMs whose disks reside on NFS datastores.
- Veeam will bypass the host and read/write directly from the NFS datastores
- Data still traverses the LAN, however it doesn’t affect the load on the ESXi host.
- Direct NFS can be used for all operations involving a backup proxy, including backup and restore.
- Some limitations to Direct NFS exist and are as follows:
- Cannot be used for VMs with a snapshot
- Cannot be used in conjunction with the VMware tools quiescence option.
- If the source VM contains disks that cannot be processed utilizing Direct NFS, those disks will be processed in Network Mode.
- The process of Direct NFS is as follows
- Backup proxy sends a request to the host to locate the VM on the NFS datastore
- Host locates the VM, retrieves metadata about the layout of the VM’s disks on the datastore, and sends it back to the backup proxy.
- Backup Proxy uses the metadata to copy the VM’s data blocks directly from the NFS datastore, obviously over the LAN – it’s NFS after all.
- Backup proxy processes the data and sends it to the target.
- Direct NFS Requirements
- Backup proxy must have access to the NFS datastore
- If the NFS server is mounted to ESXi hosts using names instead of IPs, those names need to be resolvable to IPs from the Backup Proxy
- Virtual Appliance Mode (Hot-Add)
- Easiest mode to set up and can provide a 100% virtual deployment.
- Provides fast data transfers with any storage
- Uses existing Windows VMs
- Utilizes the SCSI/SATA hot-add feature of ESXi to basically attach the source and target disks to backup proxies, thus allowing the proxy to read/write directly from the VM’s disks
- Can be used for all proxy operations, including backup and restore.
- The process is as follows
- Backup Proxy sends a request to the host to locate the source VM on the datastore.
- Host locates VM and reports back
- Backup Server triggers vSphere to create a VM snapshot of the processed VM and hot-add or directly attach source VM disks to the backup proxy.
- Proxy reads data directly from the attached disks, processes it, and sends it to the target
- Upon completion, Backup server sends commands to remove disks from the backup proxy and delete any outstanding snapshots from the source VM.
- Requirements for Virtual Appliance Mode are…
- Backup Proxy must be a VM
- ESXi host running the proxy must have access to the datastore hosting the disks of the source VMs
- Backup Server and Proxy must have the latest version of VMware Tools installed.
- Network Mode
- Network mode essentially uses the LAN to transfer your backups, thus making it one of the least desirable transport modes, especially when dealing with 1Gb links.
- Supports any type of storage and is very easy to set up.
- Leverages the ESXi management interface, which can be terribly slow, especially on older versions of vSphere.
- The process of network mode is as follows…
- Backup Proxy sends the request to the ESXi host to locate the VM on the datastore.
- Host locates VM.
- Data is copied from the production storage and sent to the backup proxy over the LAN using Network Block Device protocol (NBD).
- Proxy processes the data and finally sends it to the target.
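To tie the transport-mode walkthroughs together, here is the Direct SAN Access flow sketched in a few lines of Python. Every name here is a hypothetical stand-in for Veeam internals, not a real API – it only mirrors the locate → metadata → direct read → process → ship sequence described above.

```python
# Hypothetical sketch of the Direct SAN Access flow -- none of these
# functions are real Veeam/vSphere APIs; they just mirror the steps.

def esxi_locate_vm(vm_name):
    # Steps 1-3: the host locates the VM and returns disk-layout metadata
    # to the proxy over the network (two extents, for illustration)
    return [f"{vm_name}-extent0", f"{vm_name}-extent1"]

def san_read(extent):
    # Step 4: the proxy reads blocks straight from the SAN, bypassing the host
    return f"data({extent})"

def process(block):
    # Step 5: compress/dedupe before shipping to the target repository
    return f"compressed({block})"

def direct_san_backup(vm_name):
    layout = esxi_locate_vm(vm_name)
    return [process(san_read(extent)) for extent in layout]

print(direct_san_backup("web01"))
```

Note that the only traffic touching the ESXi host is the small metadata exchange – the heavy block reads never go through it, which is why this mode offloads the production environment so well.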
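The Virtual Appliance (hot-add) sequence maps naturally onto a try/finally pattern – snapshot and attach up front, then detach and delete the snapshot no matter what happens in between. Again, a hypothetical sketch with made-up names, not real vSphere or Veeam calls:

```python
# Hypothetical sketch of the Virtual Appliance (hot-add) flow.
# All names are illustrative stand-ins, not real vSphere/Veeam calls.

def create_vm_snapshot(vm): return f"snap-{vm}"
def hot_add_disks(snapshot): return [f"{snapshot}-disk0"]
def read_disk(disk): return f"data({disk})"
def process(data): return f"compressed({data})"
def detach_disks(disks): disks.clear()   # stand-in for hot-remove
def delete_snapshot(vm, snap): pass      # stand-in for snapshot cleanup

def hot_add_backup(vm):
    target = []
    snap = create_vm_snapshot(vm)   # Backup Server triggers a VM snapshot
    disks = hot_add_disks(snap)     # source disks hot-added to the proxy
    try:
        for disk in disks:
            # proxy reads directly from the attached disk, processes, ships
            target.append(process(read_disk(disk)))
    finally:
        # Mirrors the final step: always remove the disks from the proxy
        # and delete the snapshot, even if the transfer failed partway.
        detach_disks(disks)
        delete_snapshot(vm, snap)
    return target

print(hot_add_backup("app01"))
```

The try/finally is the point of the sketch: orphaned snapshots and stuck hot-added disks are exactly what the real cleanup step exists to prevent.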
Hyper-V Backup Modes
If we are backing up a Hyper-V environment with VBR then our backup proxies are set up a little differently than in VMware. Basically, VBR supports a couple of different Backup Modes for Hyper-V:
- On-Host Backup Mode
- Easy to use, supported out of the box.
- Good for a small infrastructure
- May impact production host CPU usage as well as add a bit of overhead network-wise.
- Off-Host Backup Mode
- Very fast
- Has no impact on production CPU or network usage.
- Requires an extra physical machine.
- If backing up a Hyper-V cluster with CSV, the off-host proxy must NOT be a part of the Hyper-V cluster, as CSV does not support duplicate LUN signatures
- Requirements of an Off-Host Backup Proxy are
- Must be a physical Windows Server 2008 R2 or higher machine with the Hyper-V role enabled.
- Must have access to the shared storage where the VMs are hosted
- A VSS Hardware provider supporting transportable shadow copies must be installed on both the proxy and the Hyper-V host running the source VM. This is distributed by storage vendors with their client component packages.
Testable tidbits about Backup Proxies
- In terms of sizing, you should allocate 1 CPU core for each concurrent task you’d like the proxy to process
- If backing up a Hyper-V cluster utilizing CSV, ensure proxy is not part of the cluster.
- Off-host backup proxies are limited to ONLY PHYSICAL MACHINES
- Direct SAN Limitations
- No VSAN support
- No VVOL support
- In the case of replication, Direct SAN is only used ON THE TARGET SIDE during the first full replication of the VM; subsequent runs will use hot-add or network mode. The source side can use Direct SAN for every run of the job.
- Can only restore thick VM disks
- Direct NFS will not work for VMs containing snapshots, thus, it can only be used on the target side for the first run of a replication job.
- Direct NFS will not work with VMware Tools Quiescence.
- Virtual Appliance Mode Limitations
- IDE disks are not supported.
- SATA disks only supported on vSphere 6.0 or newer.
- On vSphere 5.1 or earlier, VM disk size cannot exceed 1.98 TB
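The 1-CPU-per-task sizing rule above turns proxy sizing into quick arithmetic. The workload numbers below are made up purely for illustration:

```python
import math

# Rule of thumb from above: 1 CPU core per concurrent task
# (one task = processing one VM disk).
concurrent_tasks = 20   # hypothetical: disks we want processed in parallel
cores_per_proxy = 8     # hypothetical: cores available on each proxy

cores_needed = concurrent_tasks                        # 1 core per task
proxies_needed = math.ceil(cores_needed / cores_per_proxy)

print(f"{cores_needed} cores -> {proxies_needed} proxies")
```

In other words, 20 concurrent tasks on 8-core proxies means you need three proxies, not two – always round up.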
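As a study mnemonic, the transport-mode limitations scattered through this post can be rolled into one eligibility check. This is only a sketch of the rules listed above – it is not how Veeam actually selects a mode, and the parameter names are my own:

```python
def eligible_modes(proxy_is_vm, storage, vm_has_snapshot,
                   vmware_tools_quiescence, disk_bus):
    """Return the transport modes a disk could use per the rules above."""
    modes = ["network"]  # Network (NBD) mode works with any storage

    # Direct SAN: block storage only (iSCSI/FC) -- so no vSAN, no VVOL
    if storage in ("iscsi", "fc"):
        modes.append("direct_san")

    # Direct NFS: NFS datastores only, and not for VMs with snapshots
    # or when VMware Tools quiescence is enabled
    if storage == "nfs" and not vm_has_snapshot and not vmware_tools_quiescence:
        modes.append("direct_nfs")

    # Hot-add: proxy must be a VM, and IDE disks are not supported
    if proxy_is_vm and disk_bus != "ide":
        modes.append("hot_add")

    return modes

print(eligible_modes(proxy_is_vm=True, storage="nfs",
                     vm_has_snapshot=True,
                     vmware_tools_quiescence=False,
                     disk_bus="scsi"))
```

The example call shows the fallback behaviour: the VM sits on NFS, but because it has a snapshot Direct NFS drops out, leaving hot-add or network mode.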
Let’s leave this post here for now – we will learn more about proxies and how they are configured in a future module, but the next post will continue on with the VBR core components and talk about Backup Repositories.