As we continue along the Veeam v9 VMCE Study Guide it's time to finish off Module 3 and have a look at Veeam ONE. I don't have a lot of experience with Veeam ONE, so this will be a section I try to focus on throughout this guide! Just an update: I've written and passed my VMCE at this point, so there's that! Yay! Either way, I'm going to try to complete any unfinished portions in the interest of completeness. So with that, let's get going… Veeam ONE relies heavily on a client-server architecture. The architecture of Veeam ONE contains the following components.
Veeam ONE Server
- The Veeam ONE Server is responsible for gathering all of the data from our virtual environment, vCloud Director and Veeam Backup & Replication servers. It takes this data and stores it in its SQL database. The Veeam ONE Server has a couple of subcomponents that are broken out as well:
- Monitoring Server
- Handles the collection of data to present to the Monitor client or web UI.
- Pulls data from both VMware and Hyper-V as well as Veeam Backup & Replication.
- Reporting Server
- Provides a set of dashboards and predefined reports.
- Verifies configuration issues
- Tracks implemented changes in the environment
- Helps you adhere to best practices and optimize your environment
- Capacity Management
Veeam ONE Monitor Client
- The Monitor client connects to the Monitoring Server and provides the interface for monitoring your virtual environment. This is where we choose our connections to our virtual servers and backup infrastructure, and manage alarms and the data being monitored.
Veeam ONE Business View
- Allows grouping of infrastructure objects into categories that better align to the business
- Groupings/categories are applied to functionality within Monitor and Reporter
- Can be synchronized with vSphere tags.
Interesting tidbits in regards to Veeam ONE
- Can be licensed either per socket or per-VM being monitored
Veeam ONE provides us with a couple different deployment models
Just as VBR gives us the opportunity to consolidate all of the components and services onto one server, Veeam ONE does as well. The typical deployment takes the Veeam ONE Server, Web UI, and Monitor client and installs them all together on the same machine, be it physical or virtual. The SQL instance can also be installed on this machine – by default, Veeam ONE ships with SQL Server 2012 Express. This is a good way to manage a small environment, or to evaluate what Veeam ONE can do for you. If you need to enable multi-user access to real-time performance data, you can install the Veeam ONE Monitor client on separate machines.
A typical installation requires at least 4 cores, a 64-bit OS, and 8 GB of RAM (16 GB recommended). It must be installed on Windows 7 SP1 or above, and supports SQL Server, both full and Express, from 2005 and up.
The advanced deployment starts to break out individual components to different servers. The Veeam ONE Server and the Web UI components are installed on separate machines, and the Veeam ONE Monitor client can also be installed on multiple separate machines. This deployment can still use the Express installation of SQL Server; however, since you are most likely breaking out the components in order to decrease load, you will probably want to use a remote SQL Server instance for this type of setup.
The Veeam ONE Server requires at least 4 cores, a 64-bit OS, and 8 GB of RAM (16 GB recommended). Again, Windows 7 SP1 or above and SQL Server 2005 and up.
The Web UI server requires a minimum of 2 cores, a 64-bit OS (Windows 7 SP1 and up), and 2 GB of RAM.
The Monitor client runs on 32- or 64-bit OSes (Windows 7 SP1 and up) and requires only 1 socket and 1 GB of memory.
Interesting tidbits around Veeam ONE deployments
- Supports vSphere 4.1 and above
- Supports Hyper-V 2008 R2 sp1 and above
- Supports vCloud Director 5.1 and above
- Integrates with Veeam B&R 7.0 update 4 and above (standard and above)
As we continue on Module 3 of the Veeam VMCE v9 Study Guide it's time to look at VBR prerequisites, the many deployment scenarios available for VBR, and finally what options we have when upgrading Veeam Backup & Replication to version 9. One of the benefits of Veeam Backup & Replication is that you can make the deployment as simple or as complex as you want – Veeam makes it very easy to deploy VBR and adapt to any size of environment. To help break down the scenarios, Veeam provides three different types of deployments for VBR: Simple, Advanced and Distributed.
Basically, in the simple deployment we have only one instance of VBR set up and installed on either a physical or virtual machine within our environment. We have essentially one server, the Backup Server, which hosts all the roles and components we need to back up our environment. The Backup Server at this point would host the following components:
- Veeam Backup Server – for management
- Backup Proxy – for moving data
- Backup Repository – for hosting our backups.
- Mount Server – for restoration
- Guest Interaction Proxy – for deploying runtime processes into guest VMs
Interesting tidbits about Simple Deployment
- All components are installed automatically
- The Backup Repository location is determined by scanning the volumes of the machine on which we are installing. The volume with the greatest free disk space is used, with a "Backup" folder created on it.
- Best used only if you are evaluating VBR, or have a small number of VMs you need to protect
- Suggested to install on a VM (but not required) as it would give you the hot-add backup transfer option.
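The default repository selection described above boils down to a "pick the volume with the most free space" rule. A minimal sketch of that logic (the volume names and free-space figures here are invented for illustration):

```python
def pick_default_repository(volumes):
    """Mimic the simple-deployment default: choose the volume with
    the greatest free space and place a 'Backup' folder on it.
    volumes: mapping of volume root -> free bytes."""
    best = max(volumes, key=volumes.get)
    return best + "Backup"

# Hypothetical volumes on the backup server
volumes = {"C:\\": 40 * 1024**3, "D:\\": 500 * 1024**3, "E:\\": 120 * 1024**3}
print(pick_default_repository(volumes))  # -> D:\Backup
```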
Advanced Deployment is the way to go if you have a sizable environment to back up. In these cases we can't put all the load on the Backup Server, as it would be too much for it to handle. In this deployment model we have the following components:
- Backup Server – Our control plane
- Backup Proxies – Data mover components on separate servers to handle the transfer of data.
- Backup repositories – Separate servers containing capacity to store our backup files, VM copies, and replica metadata
- Dedicated Mount Servers – again, separate components in order to efficiently perform application and file level restore back to original production VMs
- Dedicated Guest Interaction Proxies – separate components allowing us to efficiently deploy runtime process in our Windows VMs.
Interesting tidbits about advanced deployments
- Allows us to easily scale up and down with our environment by adding or removing components.
- Backup traffic can be dynamically distributed amongst proxies.
- A good setup to begin replicating data offsite by deploying proxies in both local and remote sites.
- Provides HA to our backup jobs by having the ability to allow jobs to failover to other proxies if some become unavailable or overloaded
The distributed deployment is used in cases where environments are spread out geographically, with multiple backup servers installed across many locations and the backup servers themselves federated using Enterprise Manager. This way jobs can all be managed centrally, and it provides an easy way to search for and find files across all sites. This deployment model contains the following components:
- Multiple Veeam Backup Servers for each site
- Multiple Veeam proxies for each site
- Multiple repositories located at each site
- Multiple mount servers and guest interaction proxies at each site
- Veeam Enterprise Manager Server
- Optional Veeam Backup Search server to streamline search processes.
Interesting tidbits about the distributed model
- With Enterprise Manager installed, we are able to provide flexible delegation operations to users within the environment to perform restores
- Centralized license management
- All the benefits of the advanced model
Upgrading Veeam Backup & Replication to v9
If you have ever had to upgrade an instance of Veeam Backup & Replication you should know that it is a pretty simple product to upgrade. With that said, you should always do your due diligence – backing up your SQL database and Veeam configuration is always a good idea – as well as ensuring you have been completely through the release notes.
There are a few limitations and concerns you might want to pay attention to when looking to upgrade to Veeam Backup & Replication v9
- Supports a direct upgrade from version 7.0 Update 4 and 8.0
- If you have any Windows 2003 servers acting as backup infrastructure components within your current configuration, they will need to be removed before the upgrade as they aren’t supported – this will cause the upgrade to fail.
- The first time a client backup console connects to your newly upgraded backup server, it will be prompted to apply the update to the console as well.
- The Console cannot be downgraded
- The first time you login after the upgrade Veeam will prompt you to update all of the other backup infrastructure in your environment such as proxies, repositories, etc. These are upgraded in an automated deployment by the Veeam Backup Server.
Aside from our proxies and repositories there are a number of remaining Veeam Backup & Replication core components to cover. Today we will try to finish the component section of Module 3 of the Veeam VMCE v9 Study Guide. Some of these components are required, whereas some are optional – but all are certainly fair game on the VMCE exam, so it's best to know them!
Guest Interaction Proxy
During a backup Veeam will interact with the guest to do several things. To do this it deploys a runtime process within each VM it is backing up (be it Windows or Linux) to perform the following tasks:
- Application Aware Processing
- Guest File System indexing
- Transaction Log processing
In older versions all of this was done by the backup server, causing higher resource usage on the backup server, or issues if the backup server and processed VMs had degraded, slow or non-existent network connectivity. As of v9, deploying these runtime processes and performing the above three actions can be done with a Guest Interaction Proxy (Windows only – it will not work with Linux VMs). Again, interesting facts about the GIP:
- Only utilized when processing Windows based VMs. Linux VMs will still receive these packages from the Backup Server.
- Only available in Enterprise and Enterprise Plus editions.
- Can utilize multiple Guest Interaction Proxies to improve performance; it's recommended to have one at each site if you have a ROBO setup.
- Can only be deployed on a Windows based server, be it physical or Virtual.
- Must have either a LAN or VIX connection to the processed VM.
- Can be installed on the same server as the proxy, repository, backup server, WAN Accelerator, etc.
- Defined on the Guest Processing step of the backup/replication job. We can assign each job manually to use a certain proxy or let Veeam decide. If letting Veeam automatically determine which proxy to use it will go in the following order
- A machine in the same network as the protected VM that isn’t the Backup Server
- A machine in the same network as the protected VM that is the Backup Server
- A machine in another network as the protected VM that isn’t a Backup Server
- A machine in another network as the protected VM that is a Backup Server.
- If at any point it finds more than one meeting the above criteria, it selects the one which is “less loaded”. The one with the least number of tasks already being performed.
- If at any point a GIP fails, the job can fail over to the Backup Server and utilize it to perform GIP roles as it has done in previous versions.
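The automatic GIP selection order above can be sketched as a simple ranking: same network beats other network, non-backup-server beats backup server, and ties break on current task load. All proxy names, networks and task counts below are invented for illustration:

```python
def choose_gip(candidates, backup_server, vm_network):
    """Pick a Guest Interaction Proxy using the priority order from
    the study guide, breaking ties by least number of running tasks."""
    def rank(c):
        same_net = c["network"] == vm_network
        is_bs = c["name"] == backup_server
        # 1. same network, not the backup server
        # 2. same network, backup server
        # 3. other network, not the backup server
        # 4. other network, backup server
        priority = (0 if same_net else 2) + (1 if is_bs else 0)
        return (priority, c["tasks"])  # fewer running tasks wins ties
    return min(candidates, key=rank)["name"]

candidates = [
    {"name": "gip-remote",     "network": "10.0.2.0", "tasks": 0},
    {"name": "backup-srv",     "network": "10.0.1.0", "tasks": 1},
    {"name": "gip-local-busy", "network": "10.0.1.0", "tasks": 5},
    {"name": "gip-local-idle", "network": "10.0.1.0", "tasks": 2},
]
print(choose_gip(candidates, backup_server="backup-srv", vm_network="10.0.1.0"))
# -> gip-local-idle (same network, not the backup server, least loaded)
```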
Mount Server
A mount server is required in order to restore VM guest OS files and application items back to their original locations. Veeam uses this server to mount the content of the backup file to a staging server. This server should be located in the same location as the backup repository where the files are stored; if it isn't, you may end up having restorations traverse a WAN twice. To help prevent this, Veeam implements a mount server.
When a file or application item is restored to the original location, Veeam will mount the contents of the backup from the repository onto the mount server, and then copy the data from the mount server to the original location.
Interesting tidbits about mount servers…
- Direct SQL and Oracle restores do not go through the mount server, they are mounted directly to the target VM.
- A mount server is created for every backup repository and associated with it. This is a Repository setting.
- By default the mount server is created on:
- Backup repositories – if they are Windows based, the default mount server is the repository itself.
- Backup Server – for Linux-based repositories, shared folder backups, and deduplicating storage devices, the mount server is the backup server.
- Veeam Backup & Replication Console – anywhere the console is installed, so is a mount server; however, it isn't automatically registered within B&R.
- Scale-Out Backup Repositories require you to assign a mount server for each and every extent included.
- Mount servers can only be Windows based, but can be physical or virtual.
- In order to restore from storage snapshots the mount server must have access to the ESXi host which will host the temporary VM.
WAN Accelerators
WAN acceleration within Veeam works by using dedicated components to globally cache and deduplicate data between sites. Basically, we need a WAN accelerator at both our source and target sites. These sit in between the proxies, meaning data flows through the source backup proxy, then to the source WAN accelerator, then to the target WAN accelerator, then to the target backup proxy, and finally to either its replication target or backup repository.
Each accelerator will create a folder called VeeamWAN. On the source, files and digests required for deduplication are stored here. On the target, a global cache is stored.
WAN accelerators can require a lot of disk space to hold either the digests or the global cache, and therefore require some sizing exercises when you create them. This certainly depends on the amount of source VM data you are backing up, but a rule of thumb is to provide 20 GB of disk space for each 1 TB of VM disk capacity. On the target we store the global cache, which has its own capacity requirements: the recommendation here is to provide 10 GB of space for each type of OS you are processing – by default, 100 GB is allocated, i.e. 10 OSes. Some situations may require extra space on the source accelerators, for example if digest data needs to be recalculated or the cache has been cleared. To accommodate this, it's also recommended you provide 20 GB per 1 TB of source VM data on your target cache as well.
Interesting tidbits about WAN acceleration
- Must be installed on a 64 bit Windows Based machine, physical or virtual
- Can be intermingled with other proxies and repositories
- For digest data on the source accelerator, provide 20GB of space for each 1 TB of data being backed up.
- For global cache provide 10GB of space for each OS (Default is 100GB)
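The sizing rules above are easy to work out with a little arithmetic. A quick sketch using the guide's rules of thumb (the 8 TB / 4 OS-type site is a made-up example):

```python
def wan_accel_sizing(source_tb, os_types):
    """Rule-of-thumb sizing from the guide: 20 GB of digest space per
    1 TB of source VM data on the source accelerator, and 10 GB of
    global cache per OS type on the target (Veeam's default cache is
    100 GB, i.e. 10 OSes)."""
    source_digest_gb = 20 * source_tb
    target_cache_gb = 10 * os_types
    # The guide also suggests reserving digest-sized space on the
    # target in case digest data must be recalculated there.
    target_extra_gb = 20 * source_tb
    return source_digest_gb, target_cache_gb, target_extra_gb

# Hypothetical site: 8 TB of source VMs, 4 OS types in play
print(wan_accel_sizing(8, 4))  # -> (160, 40, 160)
```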
Veeam Backup Enterprise Manager
This component is optional and is really intended for those that have a distributed deployment containing multiple backup servers. VEB essentially federates your servers and offers a single pane of glass view of your backup servers and their associated jobs. From here you can do the following:
- Control and Manage jobs
- Edit and Clone Jobs
- Monitor job state
- Report on success/failure across VBR Servers
- Search for guest OS files across VBR Servers and restore via one-click
Interesting tidbits around VEB
- Can be installed on either physical or virtual, so long as it's Windows
Veeam Backup Search
Veeam Backup Search is an option that can greatly reduce load on the VEB server if you frequently need to search through a number of backups. Veeam Backup Search is deployed on a Windows machine running Microsoft Search Server; it runs the MOSS Integration service and updates the index databases of MSS – leaving VEB the ability to simply pass along the backup search queries and have the data passed back.
Veeam Gateway Server
The Veeam Gateway server is almost like a connector service, bridging the network between backup proxies and backup repositories. The only time we would need to deploy a gateway server is if we are using one of the following scenarios
- Shared Folder backup repositories
- EMC DataDomain or HPE StoreOnce appliances
ExaGrid, another deduplicating appliance supported by Veeam, actually hosts the Veeam Data Mover service directly on the box; shared folder backup repositories and the Data Domain/StoreOnce appliances do not – thus, we use a gateway server to host and run the Veeam Data Mover service for them. The gateway server is configured during the "Add Backup Repository" wizard. When prompted we can select our gateway server manually, or choose to let Veeam decide the best fit. If we let Veeam do the choosing, our gateway server is selected using the criteria below:
- For a backup job, the role of the gateway server is assigned to the proxy that was first to process VM data for a backup job.
- For Backup Copy jobs, the role of the gateway server is assigned to the mount server associated with the backup repository. If for some reason the mount server is not available this will fail over to any WAN Accelerators that might be used for that job.
- For Backup to Tape jobs the role of the gateway server is assigned to the Veeam Backup Server.
Veeam will select a different number of gateway servers per job depending on the multitasking settings of the repository – per-VM backup chains by default have multiple write streams, therefore each VM will be assigned a gateway server, whereas normal backup chains only have one gateway server assigned.
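The per-job-type assignment rules above can be sketched as a small dispatch function. All server names here are hypothetical placeholders:

```python
def assign_gateway(job_type, first_proxy=None, mount_server=None,
                   mount_server_up=True, wan_accelerators=(),
                   backup_server="veeam-backup-srv"):
    """Sketch of the automatic gateway-server selection by job type."""
    if job_type == "backup":
        # The first proxy to process VM data for the job becomes the gateway
        return first_proxy
    if job_type == "backup_copy":
        # The repository's mount server, failing over to a WAN
        # accelerator used by the job if the mount server is down
        if mount_server_up:
            return mount_server
        return wan_accelerators[0]
    if job_type == "backup_to_tape":
        # Backup to Tape jobs use the Veeam Backup Server itself
        return backup_server
    raise ValueError(f"unknown job type: {job_type}")

print(assign_gateway("backup", first_proxy="proxy-01"))        # proxy-01
print(assign_gateway("backup_copy", mount_server="mount-01",
                     mount_server_up=False,
                     wan_accelerators=["wan-01"]))             # wan-01
print(assign_gateway("backup_to_tape"))                        # veeam-backup-srv
```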
Tape Server
A tape server in Veeam Backup & Replication is responsible for hosting a tape device. Simply put, it's a Windows machine that is connected to some sort of tape library. The tape server takes on somewhat of a proxy role for tapes, performing the reading from and writing to tapes.
Rubrik, the Palo Alto based company that strives to simplify data protection within the enterprise, has recently announced a Series C worth a cool $61 million, doubling their total capital to $112 million since founding just over a couple of years ago! And as much as I love to hear about venture capital and money and whatnot, I'm much more into the tech, as I'm sure my readers are as well! With that, alongside that Series C announcement comes a new release of their product, dubbed Rubrik Firefly!
Rubrik Firefly – A Cloud Data Management Platform
With this third major release from Rubrik comes a bit of a rebrand if you will – a cloud data management platform. Nearly all organizations today have some sort of cloud play in their business, whether that be building out a private cloud to support legacy applications or consuming public cloud resources for cloud-native applications – they all have some kind of initiative that aligns with cloud. The problem Rubrik sees here is that the data management and data protection solutions running within those businesses simply don't scale to match what the cloud offers. Simply put, customers need to be able to manage, secure, and protect their data no matter where it sits – onsite, offsite, or cloud – no matter what stage of cloud they are at, thus spawning the Cloud Data Management Platform.
So what’s new?
Aside from a number of improvements and enhancements, Rubrik Firefly brings a few big new features to the table: physical workloads, edge environments, and spanning across clouds. Let's take a look at each in turn…
I had a chance to see Rubrik way back at Virtualization Field Day 5, where we got a sneak peek at their roadmap – at the time they supported vSphere only and had no immediate plans for physical workloads. The next time they showed up, at Tech Field Day 10, they actually had a tech preview of physical MSSQL support – and today that has become a reality. As you can see, they are moving very fast with development of some of these features! Rubrik Firefly adds official support for those physical SQL servers that you have in your environment – you know, the ones that take up so many resources that the DBAs just will not let you virtualize. Rubrik can now back these up in an automated, forever-incremental fashion and give you the same ease of use, efficiency, and policy-based environment that you have with your virtual workload backups. Firefly does this by deploying a lightweight Windows service, the Rubrik Connector Service, onto your SQL server, allowing you to perform point-in-time restores and log processing through the same UI you've come to know with Rubrik. Aside from deploying the service, everything else is exactly the same – we still have the SLA policy engine, SLA domains, etc.
And they don’t stop at just SQL! Rubrik Firefly offers the same type of support for those physical Linux workloads you have lying around. Linux is connected into Rubrik through an rpm package, allowing for ease of deployment – From there Rubrik pulls in a list of files and directories on the machine, and again, provides the same policy based approach as to what to back up, when to back it up, and where to store it!
Both the SQL MSI installer and the Linux RPM package are fingerprinted to the Rubrik cluster that creates them – allowing you to ensure you are only processing backups from the boxes you allow.
Although Rubrik is shipped as a physical appliance, we all know that this is a software-based world – and that doesn't change with Rubrik. The real value in Rubrik is the way the software works! Rubrik has taken their software and bundled it up into a virtual appliance aimed at remote/branch offices. This allows those enterprises with remote or branch offices to deploy a Rubrik instance at each location, all talking back to the mothership, if you will, at the main office. The same policy-based approach can then be applied to workloads running at the remote locations, allowing things such as replication back to the main office, archive to cloud, etc. to be performed at the edge of the business along with at the main office. The Virtual Appliance is bundled as an OVA and sold on a "# of VMs protected" basis – so if you have only a handful of VMs to protect you aren't paying through the nose to get that protection.
Finally we come to cloud spanning. Rubrik has always supported AWS as a target for archiving backups and brought us an easy to use efficient way of getting just the pieces of data we need back from AWS – but, we all know that Microsoft has been pushing Azure quite heavily as of late handing out lots and lots of credits! You can now take those spare credits and put them to good use as Firefly brings in support for Azure blob storage! The same searching and indexing technology that Rubrik has for Amazon can now be applied to Azure as well, giving customers options as to where they archive their data!
Bonus Feature – Erasure Coding
How about one more? With the Firefly release Rubrik now utilizes erasure coding, bringing in a number of performance and capacity enhancements to their customers with a simple software upgrade! Without putting hard numbers to it customers can expect to see a big increase in their free capacity once they perform the non-disruptive switch over to erasure coding!
Firefly seems like a great step towards the cloud data management platform – a topology agnostic approach to wrapping policy around your data, no matter where it is, ensuring it’s protected and secured! The release of a Virtual Appliance perks my ears up as well – although it’s aimed directly at ROBO deployments now who knows where it might go in the future – perhaps we will see a software-only release of Rubrik someday?!? If you are interested in learning more Rubrik has a ton of resources on their site – I encourage you to check them out for yourself. Congratulations Rubrik on the Series C and the new release!
Continuing along with the core components section of Module 3, we will now look at the backup repository – both the basic types, as well as the new Scale-Out Backup Repository introduced in v9.
So what is a backup repository?
This is where our backup data resides. It actually holds more than just VM backups – it keeps backup chains, VM copies, and metadata for our replicated VMs. There are three types of backup repositories in Veeam:
1. Simple Backup Repository
Typically a simple backup repository is just a folder or directory located on the backup storage where we can store our jobs. We can have multiple backup repositories and assign them to different jobs in order to limit the number of simultaneous jobs each one is processing, helping to spread the load. A simple backup repository can be installed on:
- Windows server with local or direct attached storage – storage can be a local disk, direct attached disk (USB Drive) or an iSCSI/FC LUN mounted to the box. Can be physical or virtual. When a Windows based repository is added the data mover service is installed and utilized to connect to whatever proxy is sending the backup data, helping to speed up the transfer and processing of data. Windows repositories can also be configured to run vPower, giving them the ability to mount their backups directly to ESXi hosts over NFS.
- Linux server with local, DAS, or mounted NFS – similar to Windows, we can use a Linux instance with directly attached storage, iSCSI/FC LUNs, or mounted NFS shares. When a task addresses a Linux target, the data mover service is deployed and run, again establishing a connection to the source proxy.
- CIFS or SMB share – an SMB share can be utilized to store your Veeam backups; however, it doesn't have the ability to run the data mover service. In this case, the gateway server (explained later) will be used to retrieve and write data to the SMB share. This affects your deployment: you may want to deploy gateway servers offsite if writing to an SMB share at a remote location, in order to help performance.
- Deduplicated storage appliance – Veeam does support EMC Data Domain, ExaGrid and HPE StoreOnce as backup repositories as well.
Interesting tidbits around simple backup repositories
- Data Domain does not necessarily improve performance, but reduces load on network
- Data Domain does not support reverse incremental and cannot exceed 60 restore points in incremental backup chains.
- ExaGrid jobs actually achieve a lower deduplication ratio when using multi-task processing. It’s better to do a single task at a time.
- When using StoreOnce, Veeam needs the Catalyst agent installed on the gateway server.
- HPE StoreOnce always uses per-vm backup files
- HPE StoreOnce does not support reverse incremental nor does it support the defrag and compact full backup options.
2. Scale-Out Backup Repository
The Scale-Out Backup Repository essentially takes several similar simple repositories and groups them together into one large pooled backup repository. This way, as you approach capacity within the SOBR, you can simply add another repository, or extent, to the pool, increasing your overall capacity.
When a simple backup repository is added as an extent to a SOBR, Veeam creates a definition.erm file. This file contains all of the descriptive information about the SOBR and its respective extents.
One setting that must be configured on a SOBR is the backup file placement policy. This determines how the backup files will be distributed between extents. There are two backup file placement policies available:
- Data Locality
- All backup files which belong to the same chain will be stored on the same extent.
- New full backups could reside on another extent, and the incrementals thereafter would also be placed on this new extent – whereas the old full and old incrementals would remain on the original extent.
- Performance
- Full and incremental backups that belong to the same chain are stored on different extents.
- Improves performance on transforms if raw devices are in use, as it spreads the I/O load across extents.
- If an extent containing any part of a targeted backup chain is missing, Veeam will not be able to perform the backup. That said, you can set the "Perform full backup when required extent is offline" setting in order to have a full backup performed when Veeam can't piece together the chain, even if an incremental is scheduled.
All this said, the placement policy is not strict – Veeam will always try and complete a backup on another extent with enough free space if an extent is not available, even if you have explicitly said to place full backups on a certain extent.
When selecting extents to place backups, Veeam goes through the following processes.
- Looks at the availability of extents and their backup files. If an extent containing part of the chain is not available, Veeam triggers a full backup to a different extent.
- It then takes into consideration the backup placement policy
- Then it looks at free space on the extents – it is placed on the extent with the most free space.
- Availability of the backup files from the chain – meaning an extent that has incremental backups from the current backup chain will have a higher priority than an extent that doesn't.
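One reading of the selection steps above is a simple ranking: placement policy first, then chain-file locality, then free space. The extent names and free-space figures here are invented, and the exact tie-break order is my assumption from how the steps are listed:

```python
def choose_extent(free_gb, policy_ok, has_chain_files):
    """Rank candidate extents: those satisfying the placement policy
    win, then those already holding files from the current chain,
    then the one with the most free space."""
    def rank(name):
        return (
            0 if policy_ok[name] else 1,        # satisfies placement policy?
            0 if has_chain_files[name] else 1,  # already holds chain files?
            -free_gb[name],                     # most free space wins
        )
    return min(free_gb, key=rank)

free_gb = {"ext-a": 500, "ext-b": 900, "ext-c": 2000}
policy_ok = {"ext-a": True, "ext-b": True, "ext-c": False}
has_chain_files = {"ext-a": True, "ext-b": False, "ext-c": False}
print(choose_extent(free_gb, policy_ok, has_chain_files))  # -> ext-a
```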
At the start of a job, Veeam estimates how much space a backup file will require and compares that to what is available on the extents. It does this in a couple of different ways depending on your backup file settings:
- Per-VM Backup Chains – In determining the full backup file size it calculates by taking 50% of the source VM size. Incrementals are 10% of the source VM size
- Single File Backup Chain – The size of the full is equal to 50% of the source VMs in the job. The first incremental is determined by taking 10% of the source VMs size – subsequent incrementals are equal to that of the size of the incremental before them.
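The two estimation rules above are straightforward percentages. A quick sketch (the 200 GB VM is a made-up example):

```python
def estimate_backup_size(source_vm_gb, per_vm_chains, prev_incremental_gb=None):
    """Free-space estimate at job start, per the guide: a full backup
    is estimated at 50% of source size and an incremental at 10%;
    in single-file chains, subsequent incrementals are estimated at
    the size of the previous incremental."""
    full_gb = 0.5 * source_vm_gb
    if per_vm_chains or prev_incremental_gb is None:
        incr_gb = 0.1 * source_vm_gb
    else:
        incr_gb = prev_incremental_gb
    return full_gb, incr_gb

# Hypothetical 200 GB VM with per-VM backup chains
print(estimate_backup_size(200, per_vm_chains=True))  # -> (100.0, 20.0)
```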
Extents within a SOBR also have some service actions that can be performed as explained below
- Maintenance Mode – This is mainly used if you need to perform some kind of maintenance on the server hosting the underlying extent such as adding memory or replacing hardware. When an extent is in maintenance mode you cannot perform any tasks targeted at the extent nor can you restore any data that resides on this extent or backup chains that have data on the extent. When entering maintenance mode Veeam first checks to see if any jobs are currently using the extent. If they aren’t, it immediately goes into maintenance mode – if they are, it gets placed into a Maintenance pending state and waits for the tasks to complete, once done, it enters maintenance mode.
- Backup Files Evacuation – This is used if you would like to remove an extent from a SOBR that contains backup files. When doing this, Veeam moves the backup files on this extent to other extents that belong to the same SOBR. Before evacuating, you must first place extents into maintenance mode. Veeam attempts to abide by its placement policies when looking where to place the evacuated backup files.
Some interesting tidbits around SOBR
- Extents can be mixed and matched, meaning we can have Windows repositories, Linux repositories and dedup appliances all providing storage for one SOBR.
- Used for Backup, Backup Copy, and VeeamZIP jobs only – note the difference – no configuration backups or replication metadata are stored on a SOBR. If you try to add an extent to a SOBR that is configured inside any other jobs it will not add – you will first need to target those jobs to another repository. Furthermore, if a backup repository is configured as a SOBR extent, you will not be able to use it for any other jobs.
- Only available in Enterprise and Enterprise Plus; however, Enterprise does have limitations – only one SOBR can be created, and it can only contain 3 extents. If you downgrade licenses while you have a SOBR you will still be able to restore from it, but jobs targeted at it will no longer run.
- When a backup repository is converted to an extent, the following settings are inherited by the extent
- Number of simultaneous tasks
- Read and write data limits
- Data compression settings
- Block alignment
- Limitations of the underlying repository – EMC Data Domain has a backup chain limit of 60 points; therefore, if we use it as an extent in our SOBR, the SOBR will have the same chain limit.
- Settings that are not inherited include any rotated drive settings as well as per-VM backup file settings. Per-VM needs to be configured globally on the SOBR.
3. Rotated Drive Backup Repositories
Backup repositories can also use rotated drives – think storing backups on external USB drives that you regularly swap in and out to take offsite. This is set up by enabling the ‘This repository is backed by rotated drives’ option on the backup repository.
A backup that targets rotated drives goes through the following process.
- Veeam creates the backup chain on whatever drive is currently attached
- Upon a new session, Veeam checks whether the backup chain on the currently connected drive is consistent, meaning it has a full backup as well as the subsequent incrementals needed to restore from. If the drives have been swapped, or the full/incremental backups are missing from the drive, then Veeam starts a new chain, creating a new full backup on the drive which is then used for subsequent incrementals. If it is a backup copy job, Veeam simply creates a new incremental and adds it to the chain.
- For any external drives attached to Windows servers, Veeam processes restore points that fall outside the retention settings and removes them from the drive if need be.
- When any of the original drives are added back into the mix, Veeam repeats this process, creating full backups if need be.
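The decision Veeam makes at the start of each session can be boiled down to a simple check. The sketch below is a simplified illustration of the behavior described above (the function name and the consistency test are my own shorthand – real chain validation looks at the actual files on disk):

```python
# Simplified sketch of the rotated-drive decision: continue the chain on the
# attached drive if it is consistent, otherwise start a new chain with a full.
# Function and labels are hypothetical illustrations.

def next_backup_type(restore_points_on_drive):
    """restore_points_on_drive: types of restore points found on the drive."""
    if "full" not in restore_points_on_drive:
        return "full"          # drive swapped or chain broken -> new chain
    return "incremental"       # consistent chain -> keep adding increments

print(next_backup_type([]))                        # -> full
print(next_backup_type(["full", "incremental"]))   # -> incremental
```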
Interesting tidbits about repositories backed by rotated drives
- Veeam can remember and keep track of drives on Windows Servers even if the drive letter changes. It does this by storing a record about the drive within its configuration database.
- When a drive is first inserted, Veeam has no idea about it, so the drive must have the exact same letter that is associated with the ‘path to folder’ setting on the repository. After this, Veeam stores the information regarding the drive in the database.
- After reinserting a drive that is already in the configuration database, Veeam will still use it successfully, even if the drive letter doesn’t match that of the path to folder setting.
- GFS Full Backups cannot be created with Backup Copy jobs on rotated drives
- Per-VM backup files are not supported on rotated drives
Veeam Backup & Replication is a very easy application to get up and running – but underneath that simplicity there are a lot of moving parts and components that make it easy. Let’s have a look at each one and explain what it does, as I’m sure you will see questions revolving around the functionality of these components on the exam.
The Backup Server
The backup server is where Veeam is actually installed. You can think of the backup server as the management plane, if you will – coordinating all of the backup jobs, kicking off schedules, and instructing other components what to do. The backup server has a few responsibilities
- Coordinates all tasks such as backup, replication, recovery verification and restore
- Controls the scheduling of jobs as well as the allocation of the resources (other components) to those jobs.
- Central management point for your Veeam environment and maintains global settings for the infrastructure
- A default backup proxy and backup repository are automatically configured on the server designated as the backup server. This allows small environments to get up and running very fast.
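The coordination role above is essentially a scheduler: the backup server decides which jobs run and which components (proxies, repositories) service them. A toy sketch of that idea, with entirely hypothetical names – Veeam's real resource scheduler is far more involved:

```python
# Toy sketch of the backup server's coordination role: hand each job to an
# available proxy. Round-robin here is just a stand-in for Veeam's real
# resource-allocation logic; all names are hypothetical.
from collections import deque

def schedule(jobs, proxies):
    pool = deque(proxies)
    plan = {}
    for job in jobs:
        proxy = pool[0]      # pick the next proxy in the rotation
        pool.rotate(-1)      # move it to the back of the line
        plan[job] = proxy
    return plan

print(schedule(["backup-sql", "backup-web", "replica-dc"],
               ["proxy01", "proxy02"]))
```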
The Backup and Replication Console
The B&R console is the client piece of the client/server application that we use to actually manage our infrastructure. In order to log into a B&R server with the console, the user needs to be a member of the local Administrators group on the B&R server. From there, users can be further limited in what they can do using Veeam’s roles functionality.
Some interesting and testable tidbits around the console are
- Multiple users can be logged into a B&R console making changes to the same jobs; however, whoever saves their changes first gets priority, meaning other users will be prompted to reload their wizards to pick up the most recent changes after that user saves.
- If a session is lost due to network issues, the session is maintained for a maximum of 5 minutes. If the connection is re-established within this time, users are good to go.
- Cannot perform a restore from configuration backup when logged in remotely – must do this directly on the backup server itself.
- When a console is installed, a number of items are also installed by default during the setup process
- PowerShell Snap-In
- Explorers for Active Directory, Exchange, Oracle, SQL, and SharePoint
- A Mount Server (explained later).
The Backup Proxy
The Backup Proxy is the heavy lifter within the Veeam environment. It handles the movement of data between source and target – whether during a backup, a replication, a VM migration job, or a restore operation, all of the data moves through a Veeam backup proxy. As I mentioned earlier, a default proxy gets installed on our backup server during the initial install – and this may be fine and dandy for a small environment, but as you find the need to increase performance, concurrency, and scale, you will need to add more backup proxies to your environment. Interesting tidbits around the backup proxy…
- Deploys on a Windows machine, physical or virtual, and that choice directly affects which backup transport modes can be used (explained later). Essentially, you can’t do hot-add if your machine is physical, however you may want to leverage physical for something like Direct SAN.
- Deployment is fully automated and handled by the Backup Server – you just point it towards a server in your infrastructure.
Depending on whether you are deploying Veeam within VMware or Hyper-V, a proxy will use a variety of methods to retrieve data, referred to by Veeam as Transport Modes in VMware and Backup Modes in Hyper-V. These are defined directly in the proxy properties.
VMware Transport Modes
- Direct SAN Access
- This is the quickest processing mode and has the least impact on your production environment, as it fully offloads the backup processing.
- Supports block storage only (iSCSI/FC). When using iSCSI both physical and virtual backup proxies can be deployed.
- Direct SAN can be used for all operations involving the proxy, both backup and restore.
- Requirements of Direct SAN Access are…
- The backup proxy needs to have direct access to the production storage through either a hardware or software HBA.
- LUNs must be exposed/zoned/presented to the backup proxy performing the Direct SAN Access. Volumes should be visible in Disk Management, but not initialized. Veeam automatically sets the SAN policy within each proxy to Offline Shared to help prevent initialization from occurring.
- For restore operations the proxy will need to have write access to the LUNs hosting the disks.
- The process of Direct SAN Access is as follows
- Backup proxy sends a request to the host to locate the necessary VM on the datastore
- ESXi host locates VM and retrieves metadata about the layout of the VMs disks on the storage.
- The host then sends this metadata back to the backup proxy via the network
- The backup proxy uses the metadata to copy the VMs data blocks directly from the SAN.
- Proxy processes the data and finally sends it to the target.
- Direct NFS Access (new in v9)
- Recommended for VMs whose disks reside on NFS datastores.
- Veeam will bypass the host and read/write directly from the NFS datastores
- Data still traverses the LAN; however, it doesn’t affect the load on the ESXi host.
- Direct NFS can be used for all operations involving a backup proxy, including backup and restore.
- Some limitations to Direct NFS exist and are as follows
- Cannot be used for VMs with a snapshot
- Cannot be used in conjunction with the VMware tools quiescence option.
- If the source VM contains disks that cannot be processed using Direct NFS, those disks will be processed in Network mode.
- The process of Direct NFS is as follows
- Backup proxy sends a request to the host to locate the VM on the NFS datastore
- Host locates the VM, retrieves metadata about the layout of the VM’s disks on the datastore, and sends it back to the backup proxy.
- Backup Proxy uses the metadata to copy VM blocks directly from the NFS datastore, obviously over the LAN – it’s NFS after all.
- Backup proxy processes the data and sends it to the target.
- Direct NFS Requirements
- Backup proxy must have access to the NFS datastore
- If the NFS server is mounted to ESXi hosts using names instead of IPs, the IPs need to be resolvable to names on the Backup Proxy
- Virtual Appliance Mode (Hot-Add)
- Easiest mode to set up and can provide a 100% virtual deployment.
- Provides fast data transfers with any storage
- Uses existing Windows VMs
- Utilizes the SCSI/SATA hot-add feature of ESXi to basically attach the source and target disks to backup proxies, allowing the proxy to read/write directly from the VM’s disks
- Can be used for all proxy operations, including backup and restore.
- The process is as follows
- Backup Proxy sends a request to the host to locate the source VM on the datastore.
- Host locates VM and reports back
- Backup Server triggers vSphere to create a VM snapshot of the processed VM and hot-add or directly attach source VM disks to the backup proxy.
- Proxy reads data directly from the attached disks, processes it, and sends it to the target
- Upon completion, the backup server sends commands to remove the disks from the backup proxy and delete any outstanding snapshots from the source VM.
- Requirements for Virtual Appliance Mode are…
- Backup Proxy must be a VM
- ESXi host running the proxy must have access to the datastore hosting the disks of the source VMs
- Backup server and proxy must have the latest version of VMware Tools installed.
- Network Mode
- Network mode essentially uses the LAN to transfer your backups, making it one of the least desirable transport modes, especially when dealing with 1Gb links.
- Supports any type of storage and is very easy to set up.
- Leverages the ESXi management interface, which can be terribly slow, especially on older versions of vSphere.
- The process of network mode is as follows…
- Backup Proxy sends the request to the ESXi host to locate the VM on the datastore.
- Host locates VM.
- Data is copied from the production storage and sent to the backup proxy over the LAN using the Network Block Device (NBD) protocol.
- Proxy processes the data and finally sends it to the target.
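Pulling the four modes above together, you can reason about which one applies with a simple decision chain. This is a heavily simplified sketch for study purposes – the function name and inputs are my own, and Veeam's actual automatic mode selection considers more factors:

```python
# Simplified sketch of transport-mode selection across the four VMware modes.
# Hypothetical function; Veeam's real selection logic has many more rules.

def pick_transport_mode(proxy_is_vm, storage, san_zoned, has_snapshot):
    """storage: 'nfs' or 'block' (iSCSI/FC)."""
    if storage == "nfs" and not has_snapshot:
        return "Direct NFS"                  # bypasses the host entirely
    if storage == "block" and san_zoned:
        return "Direct SAN"                  # fastest, offloads the host
    if proxy_is_vm:
        return "Virtual Appliance (Hot-Add)" # virtual proxy attaches disks
    return "Network (NBD)"                   # works everywhere, slowest

print(pick_transport_mode(False, "block", True, False))  # -> Direct SAN
print(pick_transport_mode(True, "nfs", False, True))     # snapshot blocks Direct NFS
```

Note how the second call falls through to hot-add: as covered above, Direct NFS cannot be used for VMs with a snapshot.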
Hyper-V Backup Modes
If we are backing up a Hyper-V environment with VBR, then our backup proxies are set up a little differently than in VMware. Basically, we have a couple of different backup modes within VBR’s support for Hyper-V
- On-Host Backup Mode
- Easy to use, supported out of the box.
- Good for a small infrastructure
- May impact CPU usage on the production host as well as introduce a bit of network overhead.
- Off-Host Backup Mode
- Very fast
- Has no impact on production CPU or network usage.
- Requires an extra physical machine.
- If backing up a Hyper-V cluster with CSV, the off-host proxy must NOT be part of the Hyper-V cluster, as CSV does not support duplicate LUN signatures
- Requirements of an Off-Host Backup Proxy are
- Must be a physical Windows 2008 R2 or higher server with the Hyper-V role enabled.
- Must have access to the shared storage where the VMs are hosted
- A VSS Hardware provider supporting transportable shadow copies must be installed on both the proxy and the Hyper-V host running the source VM. This is distributed by storage vendors with their client component packages.
Testable tidbits about backup proxies
- In terms of sizing, you should allocate 1 CPU core for each task you’d like the proxy to process
- If backing up a Hyper-V cluster utilizing CSV, ensure proxy is not part of the cluster.
- Off host backup proxies are limited to ONLY PHYSICAL MACHINES
- Direct SAN Limitations
- No VSAN support
- No VVOL support
- In the case of replication, it’s used ONLY ON THE TARGET SIDE during the first full replication of a VM; subsequent runs will use hot-add or network mode. The source can use Direct SAN for every run of the job.
- Can only restore thick VM disks
- Direct NFS will not work for VMs containing snapshots; thus, on the target side it can only be used for the first run of a replication job.
- Direct NFS will not work with VMware Tools Quiescence.
- Virtual Appliance Mode Limitations
- IDE disks are not supported.
- SATA disks only supported on vSphere 6.0 or newer.
- vSphere 5.1 or earlier – VM disk size cannot exceed 1.98 TB
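The "1 CPU core per task" sizing rule above lends itself to a quick back-of-the-envelope calculation. The numbers and the one-task-per-disk assumption below are illustrative only, not official Veeam guidance:

```python
# Back-of-the-envelope proxy sizing from the 1-core-per-task rule above.
# Assumptions (mine, not Veeam's): one task ~= one VM disk, and an average
# per-disk processing time you estimate from your own environment.
import math

def proxy_cores_needed(vm_disks, backup_window_hours, minutes_per_disk=30):
    total_minutes = vm_disks * minutes_per_disk
    window_minutes = backup_window_hours * 60
    # Concurrent tasks needed to finish inside the window; 1 core per task.
    return math.ceil(total_minutes / window_minutes)

print(proxy_cores_needed(vm_disks=80, backup_window_hours=8))  # -> 5
```

So under those assumptions, 80 disks in an 8-hour window would need roughly 5 concurrent tasks, i.e. 5 cores spread across your proxies.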
Let’s leave this post here for now – we will learn more about proxies and how they are configured in a future module, but the next post will continue on with the VBR core components and talk about Backup Repositories.