
VMCE v9 Study Guide – Module 4 – WAN Accelerators and Managing Network Traffic

VMCE LogoI didn’t really see WAN Acceleration mentioned anywhere within the course description of the VMCE class, so I decided this might be the best place to fit it in since we will be talking about managing network traffic in Module 4.  That said, I’m sure the topic will be brought up again in later modules, however let’s go over what we can here!

 

WAN Acceleration

WAN Acceleration is Veeam’s answer to help optimize VM traffic that will be going over the WAN.  It does this by deploying at least 2 WAN Accelerators on 64-bit Windows servers, one located at the source and one located at the target.  If you remember back to Module 3, we spoke a bit about WAN Acceleration, so some of this may be a repeat, but it’s good to know for the exam.

Configuring WAN Acceleration happens in the following way:

  • Configure the source-side WAN Accelerator first, then the target.
    • Launch the New WAN Accelerator wizard from the Backup Infrastructure view
    • From the Server step
      • specify the Windows Server you wish to use for the accelerator
      • provide a description
      • Traffic Port – Specify network port used for source to target communication – defaults to 6165
      • Streams – Number of connections that must be used to transmit data (defaults to 5).  Keep in mind that as this number increases, so do the bandwidth and accelerator resources required.  Applies only to the source WAN Accelerator.
    • Cache – location of service files and global cache
      • Folder – Path to the location where service files (for source and target) or the Global Cache (target only) must be stored.  Defaults to c:\VeeamWAN.  It’s also best not to nest these deep in the file system, as service file names can be very long – no use in making them longer.
      • Size – Specify a size for the target WAN Accelerator according to the sizing best practices – we will go over this below
    • Review
      • Review components to be installed (data mover service, WAN Accelerator service) and click ‘Next’ to finish.

Clearing/Populating Global Cache

These processes can all be accomplished by right-clicking the WAN Accelerator within the WAN Accelerators node in the Backup Infrastructure view and selecting the desired operation (the processes are explained below)

WAN Accelerator Sizing

As mentioned above, there are some best practices to follow when sizing how much space we need for WAN Accelerators, both source and target.

Source WAN Accelerator

  • Veeam analyzes data blocks that will go to the target and calculates digests for them; these digests are stored on our source accelerator.
  • The size of the cache on the source accelerator depends on the capacity of all our source VM disks.
    • Every 1TB of data requires 20GB of cache space.  E.g., if you have 4TB of VM disks you are backing up, you should provide 80GB of cache on the source accelerator.
    • There is no global cache on the source – only the digest metadata is stored here.  The global cache is just for target accelerators.

Target WAN Accelerator

  • This is where our global cache is stored.
    • Global Cache is basically a library that holds data blocks that go from source to target.
    • Populated fully on the first cycle of a job.
    • If a new data block is constantly sent across the WAN, it will be added to the global cache.
    • If an already cached block is not sent over the WAN after a period of time, it will be removed from the global cache.
    • If a periodic check deems a block in the global cache is corrupt, it will remove it.
    • The global cache can copy blocks from one source accelerator’s cache folder to another if they are the same – meaning if we have two locations each replicating a Windows 2012 server, we can simply copy blocks from the first cache folder to the second without having to send them across the WAN.
    • The Global Cache can be pre-populated without actually running the job.
      • Useful on the first run of a job so all data blocks don’t need to be copied
      • Useful if the cache becomes corrupt, to prevent all data blocks from being copied again.  This requires you to clear the cache first
      • Encrypted backups are not used for population
      • You cannot start any jobs using the accelerator while the cache is being populated.
      • Veeam uses data blocks stored in specified repositories to populate the cache – only OS blocks are copied.
        • That said, if other accelerator cache is already located on the target, Veeam will match OSes from the source repository and copy those blocks directly from the existing cache folders.
      • Blocks are copied to a default cache folder; when a remote job starts, Veeam renames this folder to match the source accelerator used in the job.
  • Recommended to provide 10GB of cache for every type of OS utilized (defaults to 100GB, so 10 OSes).  E.g., say we back up 10 VMs (1x Win7, 6x Win2008, 3x Win2012) – we should provide at least 30GB (3 OS types x 10GB).
  • If the digest data on the source accelerator is missing or for some reason cannot be used, the target accelerator will have to recalculate it and will therefore require space to do so.  The same source sizing rule therefore also applies to the target, in addition to the per-OS cache allocation.  E.g., if those 10 VMs also occupy 4TB of space, we need to add 80GB (20GB/TB * 4TB) more cache space in addition to our OS cache.  So 80GB for digest calculation + 30GB for OS caching = 110GB total.
  • All this said, the global cache is calculated per source accelerator.  Within Veeam we have the ability to apply a many-to-one setup, meaning many source accelerators running through 1 target accelerator.  This grows our cache size depending on the number of source accelerators.  The formula is as follows (see the sizing sketch after this list)
    • Total cache size = (number of source accelerators) * (target WAN accelerator cache setting [10GB per OS type]) + 20GB per TB of source data.
    • Let’s say we add a second source accelerator to the example we have been using.  The second accelerator has 1TB of source data spread across 2 OS types (Linux, Server 2003).  We would end up with the following global cache size
      • Total cache size = 2 (we have two source accelerators) * 50GB (5 OS types [Linux, Server 2003, Server 2008, Server 2012, Win7] at 10GB each) + 100GB (5TB of source data spread across the 2 source locations)
      • 2 * 50GB + 100GB = total cache size of 200GB
  • With all of this said, if you have the space it’s best to allocate as much as you can – a larger cache can hold more repeating data blocks and therefore provides more efficient acceleration.
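
Just to make the sizing math above easier to sanity-check, here’s a minimal Python sketch of the rules of thumb described in this section (20GB of digest space per TB of source data, 10GB of target global cache per OS type, multiplied across source accelerators).  The function names and structure are my own illustration, not anything Veeam ships – always validate against Veeam’s own sizing guidance.

```python
# Rough sizing helper for the WAN accelerator rules of thumb described above.
GB_PER_TB_SOURCE = 20   # digest space: 20GB per 1TB of source VM disks
GB_PER_OS_TYPE = 10     # target global cache: 10GB per OS type

def source_digest_gb(source_tb: float) -> float:
    """Digest space needed on a source accelerator (no global cache there)."""
    return source_tb * GB_PER_TB_SOURCE

def target_cache_gb(num_source_accelerators: int,
                    os_types: set[str],
                    total_source_tb: float) -> float:
    """Total cache size on the target accelerator:
    (number of source accelerators) * (10GB per OS type)
    + 20GB per TB of source data (for digest recalculation)."""
    per_accelerator_os_cache = len(os_types) * GB_PER_OS_TYPE
    return (num_source_accelerators * per_accelerator_os_cache
            + total_source_tb * GB_PER_TB_SOURCE)

if __name__ == "__main__":
    # The example from the text: two source accelerators, five OS types,
    # 5TB of source data spread across the two source locations.
    os_types = {"Win7", "Server 2008", "Server 2012", "Linux", "Server 2003"}
    print(source_digest_gb(4))              # 80.0 GB on the first source
    print(target_cache_gb(2, os_types, 5))  # 2*50 + 100 = 200.0 GB
```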

Data Block Verification

Veeam calculates checksums on blocks being transferred between source and target to help ensure that no corrupt blocks are stored in the global cache.  This works in the following way (a small sketch of the idea follows the list):

  • Before sending, Veeam calculates a checksum on the block
  • When the target receives the block it re-calculates this checksum (before it is even written to cache).
  • The checksums are compared; if there is a difference, the target sends a request for the source to resend the block.  Upon receiving the block again, it is written to the global cache.
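
As a rough illustration of that verify-and-resend loop, here’s a small Python sketch.  The hash algorithm and function names are assumptions purely for illustration – the guide doesn’t state which checksum Veeam actually uses.

```python
import hashlib

def checksum(block: bytes) -> str:
    # Hash choice is illustrative; the actual algorithm isn't specified here.
    return hashlib.sha256(block).hexdigest()

def receive_block(block: bytes, source_checksum: str, resend) -> bytes:
    """Re-calculate the checksum on arrival, before anything is written to
    the global cache. On a mismatch, ask the source to resend the block."""
    while checksum(block) != source_checksum:
        block = resend()            # request the block from the source again
    return block                    # only now is it written to the cache

# Example: a "source" that garbles the first transfer, then sends it cleanly.
original = b"VM data block"
attempts = iter([b"VM data blocc", original])
good = receive_block(next(attempts), checksum(original), lambda: next(attempts))
assert good == original
```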

WAN Acceleration itself works in the following way (a small digest-deduplication sketch follows the list):

  • If using a backup copy job, Veeam uncompresses the backup file to analyze its content
  • The source accelerator analyzes data blocks and creates a file with digests for those blocks.
  • Veeam compresses the data and sends it to the target
  • The target populates the global cache with blocks from the copied file
  • On the next job cycle, the source analyzes the data blocks in the file that need to be transferred and creates digests just for these blocks
  • The source compares the new digests with the old – if duplicate blocks are found, they are not copied over the WAN.  Instead, the target pulls them from the global cache
  • Also, restore points already existing on the target side are analyzed – if duplicate blocks are located in them, the target takes them directly from the restore points.
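
The heart of that flow is digest comparison – only blocks whose digests the target hasn’t already seen get shipped across the WAN.  Here’s a minimal Python sketch of the idea (my own naming, not Veeam’s internals):

```python
import hashlib

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def transfer(blocks: list[bytes], global_cache: dict[str, bytes]) -> list[bytes]:
    """Send only blocks whose digests aren't already in the target's global
    cache; duplicates are pulled from the cache instead of crossing the WAN."""
    restored = []
    for block in blocks:
        d = digest(block)
        if d in global_cache:
            restored.append(global_cache[d])   # taken from cache, no WAN traffic
        else:
            global_cache[d] = block            # "sent" over the WAN and cached
            restored.append(block)
    return restored

cache: dict[str, bytes] = {}
first_run = transfer([b"os-block", b"app-block"], cache)    # both blocks sent
second_run = transfer([b"os-block", b"new-block"], cache)   # only one block sent
print(len(cache))                                           # 3 unique blocks cached
```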

Managing Network Traffic

Before we get into some of the ways we can throttle and manage our network manually, let’s have a look at a couple different ways Veeam manages network disconnects automatically.

Data Transport on WAN Disconnect

This type of reconnection attempt exists only for jobs that utilize WAN accelerators.  Basically, if a connection drops while we are transferring VM data between accelerators, VBR will pick up and resume the job from the point where the connection was lost once service is restored, rather than starting all over again.  When the connection is restored, VBR will initiate a new transfer process, this time writing data to a new working snapshot.  If the connection drops multiple times, Veeam will only keep 2 working snapshots on the VM by merging previous ones together.  Once all data has made its way to the target, all snapshots are merged and a new restore point is created.
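
Here’s a tiny sketch of the “keep at most two working snapshots” bookkeeping described above – purely illustrative, with names of my own invention:

```python
def on_reconnect(working_snapshots: list[str], counter: int) -> list[str]:
    """Each reconnect writes to a new working snapshot; if the connection has
    dropped multiple times, previous snapshots are merged so that at most two
    working snapshots remain on the VM."""
    working_snapshots.append(f"working-{counter}")
    while len(working_snapshots) > 2:
        merged = working_snapshots.pop(0)
        working_snapshots[0] = f"{merged}+{working_snapshots[0]}"
    return working_snapshots

snaps: list[str] = []
for i in range(1, 4):           # three drops/reconnects
    snaps = on_reconnect(snaps, i)
print(snaps)                    # ['working-1+working-2', 'working-3']
```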

Resume on Disconnect

This process handles network disconnects that do not involve WAN accelerators – disconnects between the backup server, proxies, and repositories (those storing replica metadata).  VBR will attempt to re-establish the connection every 15 seconds for up to 30 minutes, picking up right where it left off.
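
That resume-on-disconnect behavior boils down to a bounded retry loop – try every 15 seconds, give up after 30 minutes.  A hedged Python sketch of that timing logic (the helper names are illustrative):

```python
import time

RETRY_INTERVAL_S = 15          # attempt to reconnect every 15 seconds
RETRY_WINDOW_S = 30 * 60       # ...for up to 30 minutes

def resume_transfer(try_reconnect) -> bool:
    """Keep retrying the connection on the documented schedule; return True
    if it comes back so the transfer can pick up where it left off."""
    deadline = time.monotonic() + RETRY_WINDOW_S
    while time.monotonic() < deadline:
        if try_reconnect():
            return True        # resume from the last transferred point
        time.sleep(RETRY_INTERVAL_S)
    return False               # 30 minutes elapsed - the job fails

# Example: a connection that comes back on the third attempt.
attempts = iter([False, False, True])
# print(resume_transfer(lambda: next(attempts)))  # True (takes ~30s of real time)
```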

Network Traffic Throttling Rules

Network throttling rules are set up and enforced globally on the backup server.  They essentially limit the maximum throughput of traffic going from source to target.  They are set with a pair of IP ranges: a source IP range and a target IP range.  If components within the backup infrastructure fall into the specified source and target IP ranges, the rule is applied to them.  The steps to set them up are as follows…

  • Select Network Traffic from the Main Menu and click ‘Add ‘ in the Global Network Traffic Rules section.
  • In the source ip range, specify a range of IPs representing the source components
  • In the target IP range, specify a range of IPs representing the target components.
  • Select the box to Throttle Network traffic
    • Specify the maximum speed at which VM data may be transferred in the Throttle to field
  • In the Apply throttling section we can set up a schedule during which this rule will apply, or have it apply all the time.
    • If rules with overlapping schedules apply to the same components, the rule with the lowest maximum speed wins (see the rule-matching sketch after this list)
  • Network data encryption is also set up in this same manner with the Encrypt network traffic checkbox.  More on network encryption below
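
Conceptually, rule matching works like this: a rule applies when the source and target components fall inside its IP ranges, and when several applicable rules overlap, the lowest maximum speed wins.  Here’s a small Python sketch of that idea – the data structures are my own illustration, not Veeam’s implementation (schedules are omitted for brevity):

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class ThrottleRule:
    source_net: str      # e.g. "192.168.1.0/24"
    target_net: str      # e.g. "10.0.0.0/24"
    limit_mbps: int      # the "Throttle to" value

def applicable_limit(rules: list[ThrottleRule], src_ip: str, dst_ip: str):
    """Return the effective limit for a transfer between src_ip and dst_ip:
    the lowest maximum speed among all matching rules, or None (unthrottled)."""
    limits = [
        r.limit_mbps
        for r in rules
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(r.source_net)
        and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(r.target_net)
    ]
    return min(limits) if limits else None

rules = [ThrottleRule("192.168.1.0/24", "10.0.0.0/24", 100),
         ThrottleRule("192.168.0.0/16", "10.0.0.0/8", 50)]
print(applicable_limit(rules, "192.168.1.10", "10.0.0.20"))   # 50 - lowest wins
```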

Managing Data Transfer Connections

By default Veeam uses 5 TCP/IP connections to transfer data from source to target.  This may cause network traffic to be heavy if multiple jobs run at the same time.  This can also be changed in the Global Network Traffic Rules settings using the ‘Use multiple upload streams per job’ selection box.

Enabling Network Encryption

By default Veeam encrypts data with AES-256 when it flows to/from public IPs, however you may want to have encryption between your local/remote sources and targets as well.  Again this is done in the Global Network Traffic Rules window by clicking Add.  It’s the same process as setting up throttling rules (above), but checking the ‘Encrypt network traffic’ box instead.

Specifying priority networks for transfer

VBR gives you the ability to specify which networks you want to send your VM data on.  This is useful if you have some sort of backup network or non-production network that is utilized for backup data.  Again, this is set up from the Global Network Traffic Rules section:

  • Click on Networks
  • Select to ‘Prefer the following networks for backup and replication traffic’ and click ‘Add’
  • Specify a network in a CIDR notation or mask
  • VBR will failover to the production network if for some reason the preferred networks are unavailable.

VMCE v9 Study Guide – Module 4 – Adding Backup Repositories

If you can recall, in Module 3 we discussed the three types of backup repositories in VBR: simple, scale-out, and those backed by rotated drives.  Now let’s go over how to add and configure each type as we continue on with Module 4 of the VMCE v9 Study Guide!

 

Adding Simple Backup Repositories

If we can remember back to Module 3, we actually have 4 sub-types of simple backup repositories: Microsoft Windows, Linux, shared CIFS folders, and deduplicating storage appliances.

There are a number of prerequisites we must meet depending on the type of repository we are adding, listed below

  • Linux repositories
    • Requires SSH daemon installed and configured as well as SCP utility available on the Linux server hosting the repository
  • EMC Data Domain – note that without meeting these requirements you can still add Data Domain as a CIFS share, however you will not be able to leverage any DD Boost technology.
    • Must be running DD OS 5.4 or later
    • DD Boost license must be installed and DD Boost enabled and configured
    • Must have a gateway server added to the VBR environment
  • ExaGrid
    • Must be firmware 4.7 or later
    • Must follow ExaGrid best practices to set up
  • HPE StoreOnce – without meeting these requirements you can still add HPE StoreOnce as a shared folder, however in this case VBR will perform the deduplication.
    • Must be running firmware 3.13.1 or later
    • Must have StoreOnce Catalyst license
    • Must use Catalyst as a backup target and be configured to work with Low Bandwidth mode (primary and secondary transfer policy)
    • Must have a gateway server added to the VBR Environment
    • Client account used to connect must have access permission on the Catalyst store where backup data resides

Different options will appear in the wizard depending on the type of repository we are adding, however the process of adding it is somewhat the same.

  • From the Backup Infrastructure View right-click the Backup Repositories node and select Add Backup Repository
  • Name – specify a name and description for the backup repository.
  • Type – select the type of repository you want to add.
    • Microsoft Windows server
      • Server – Select the Windows server you would like to use from the drop down.  If the server hasn’t already been added you can do so by clicking Add New.  Clicking Populate will populate a list of disk storage connected to the server.
    • Linux Server
      • Server – Select the Linux server you would like to use from the drop down.  If the server hasn’t already been added you can do so by clicking Add New.  Clicking Populate will populate a list of disk storage connected to the server.
    • Shared Folder
      • In the shared folder field, specify the UNC path to the folder you want to use.
      • If the share requires credentials, select the ‘This share requires access credentials’ and provide credentials.
      • If you have a fast connection between the source and backup repository we can leave the gateway server at automatic selection.  This will automatically choose a gateway server per job session.  If the connection is slower or over a WAN we can explicitly specify which gateway server to use.
    • Deduplicated Storage Appliance
      • Deduplicated Storage – Select either EMC, ExaGrid, or HP StoreOnce
        • Data Domain
          • Specify the connection settings to the Data Domain.  If connecting over FC, select ‘Use Fibre Channel’ and enter a Data Domain Fibre Channel server in the server name field.
          • Specify credentials supporting DD Boost
          • Select whether to use in flight encryption.
          • Specify a gateway server, or leave it set to automatic if the connection is fast.  If the Data Domain is connected over FC you must explicitly define a gateway server, and said server must have access to the Data Domain appliance over FC.
        • ExaGrid
          • From the Repository server drop down select the ExaGrid appliance you wish to use.  If it isn’t added you must add it with the ‘Add New’ button.
        • StoreOnce
          • Specify your connection settings to the StoreOnce appliance, selecting ‘Use FC’ if connecting over Fibre Channel.
          • Specify credentials having access to the Catalyst store where you wish to store the backups
          • Select whether to automatically choose a gateway server or explicitly define one.  Again, if using FC you must explicitly define a gateway server and it must have access to the FC StoreOnce appliance.
  • Repository – this is where we specify where on the selected repository we wish to store our backups, as well as load control settings.  Again this may be different depending on what type of repository we are adding
    • Location – specify a path to the folder to store backups in.  For DataDomain click Browse and select a location – for StoreOnce, select a Catalyst store from the list.  For Windows/Linux, specify a path.
    • Load Control – limits the number of concurrent tasks and data ingestion rate.  The limiting of read and write data rates applies to the combined rate of both.
      • Advanced presents a number of additional settings to place on the repository.
        • Align backup file data blocks – Veeam will align VM data saved to a backup file at a 4KB block boundary.  Provides better deduplication but can result in wasted space depending on the level of fragmentation on the storage.
        • Decompress backup data blocks before storing – This will decompress data before storing it, even if compression is enabled on the job.  A setting that is useful when targeting deduplicating appliances, as compression can still be used for the transfer while the appliance receives data it can deduplicate effectively.
        • This repository is backed by rotated hard drives – select this if you plan on using rotated drives.
        • Use per-VM backup files – recommended if you use a dedup storage appliance or a repository supporting multiple streams.  Data will be written with several streams, one backup file (and stream) per VM.
        • Deduplicating storage appliances supported by Veeam have the following recommendations
          • Data Domain
            • Align backup file blocks – disabled
            • Decompress Backup data blocks – enabled
            • backed by rotated drives – disabled
            • Use Per-VM Backup Files – enabled
          • ExaGrid
            • Align backup file blocks – disabled
            • Decompress Backup data blocks – disabled
            • Backed by Rotated Drives – Disabled
            • Use Per-VM Backup Files – Enabled
            • Limit max concurrent tasks – 1
          • StoreOnce
            • Align backup file blocks – disabled
            • Decompress Backup data blocks – enabled
            • backed by rotated drives – disabled
            • Use Per-VM Backup Files – enabled
  • Specify Mount Server settings.
    • From the server list select a mount server to use with the backup repository.  If the desired one is not there we can add it at this point by selecting ‘Add New’
    • Enable vPower NFS server – makes the repository accessible via Veeam vPower NFS, used for SureBackup jobs, virtual labs, etc.
      • Folder – specify a folder to be used as the vPower NFS root folder
    • Mount server will not be deployed until after the repository has been fully configured.
    • Ports – allows us to customize the network ports used by the vPower NFS service.  By default these are…
      • RPC port: 6161
      • Mount Port: 1058
      • vPower NFS port: 2049
  • Review settings
    • Here you can review your settings and complete the wizard.  There are a couple of other options.  If the repository already contains backup files we can select to import these automatically; if so, they will display under Imported Backups.  If there are also guest file index files located on the repository we can choose to import these indexes as well.
  • Apply settings and watch as VBR updates the status on all the subtasks it performs

Adding a Scale-Out Backup Repository

Before we get into the process of adding a Scale-Out Backup Repository it’s best to have a little review of some of the requirements and limitations associated with them.  We went over this in Module 3, but for memory purposes let’s list a few of them below…

  • Only Available in Enterprise and Enterprise Plus – Enterprise is limited to 1 SOBR with 3 extents only.
  • If license is downgraded to standard with a SOBR present you will not be able to back up to it, but will be able to perform restores.
  • Cannot use SOBR as a target for Config Backups, Replication jobs, VM Copy Jobs or Endpoint jobs.  If repository contains data from any of these unsupported jobs you will need to retarget the jobs at another repository AND REMOVE DATA from the repository

To add a SOBR, right-click the ‘Scale-out Backup Repositories’ node in the Backup Infrastructure view, select ‘Add Scale-out Backup Repository’, and follow these configuration steps.

  • Name – Add a name and description for the SOBR
  • Extents – Click ‘Add’ to select the backup repositories that you wish to add as an extent to this SOBR.
    • Advanced Options on this screen include whether to Use Per-VM backup files, and whether or not to perform a full backup when a required extent is offline.  This basically means that if an extent that contains previous files from a backup chain is offline, Veeam will create a full backup file instead of a scheduled incremental.
  • Extents – If we have selected a repository that is already used by jobs of a supported type (backup jobs) or already has supported backup files on it such as VeeamZIP backups you will be prompted to update the jobs/backup to point to the new repository.  Need to click yes here to continue with the creation.
  • Policy – this is where we specify our backup placement policy.  If you can remember back to Module 3, we have two (see the placement sketch after this list)
    • Data locality – stores backup files that belong to the same chain together – full/incremental on the same extent.  Any new backup chain, for example a new full with its incrementals, could land on the same extent or another extent, so long as the files of an individual chain stay together.
    • Performance – stores full and incremental on different extents allowing read/write streams to be optimized to different underlying disks.
      • Performance allows you to restrict which types of backups can be stored on a specific extent in the Advanced settings.  We could place full backups on extent1, and incremental on extent2.  By default, Veeam stores both on the same extents, so long as they are from different chains.
  • Summary – review details and click finish
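
To make the two placement policies concrete, here’s a minimal Python sketch of how extent selection could work for each one, based only on the behavior described above (data locality keeps a chain’s files together; performance separates fulls and incrementals).  This is an illustration under those assumptions, not Veeam’s actual placement algorithm:

```python
def pick_extent(policy: str, backup_type: str, chain_id: str,
                extents: list[str], chain_locations: dict[str, str]) -> str:
    """Illustrative extent selection for a scale-out backup repository."""
    if policy == "data_locality":
        # Keep every file of a chain on the extent where the chain already
        # lives; a brand-new chain can land on any extent.
        return chain_locations.setdefault(chain_id, extents[0])
    if policy == "performance":
        # Fulls and incrementals go to different extents.
        return extents[0] if backup_type == "full" else extents[1]
    raise ValueError(policy)

extents = ["extent1", "extent2"]
locations: dict[str, str] = {}
print(pick_extent("data_locality", "full", "chainA", extents, locations))         # extent1
print(pick_extent("data_locality", "incremental", "chainA", extents, locations))  # extent1
print(pick_extent("performance", "incremental", "chainA", extents, locations))    # extent2
```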

Extending a SOBR is just a matter of going back into the SOBR properties and adding more extents during the extents step.

Removing extents from a SOBR requires a bit more work, as they may contain backup files already.  To remove an extent we must follow these steps:

  • Put extent in maintenance mode
    • Click on your SOBR name in the Backup Infrastructure view
    • From the extent list, right-click the desired extent and select ‘Maintenance Mode’
  • Evacuate backups from the extent
    • Click on your SOBR name in the Backup Infrastructure View
    • Right click the desired extent and select ‘Evacuate Backups’
  • Remove the extent from the SOBR
    • From within the properties screen of your SOBR select the desired extent and click ‘Remove’
      • Note, if you skipped the ‘Evacuate Backups’ step you will be prompted to do so here.  If you choose not to, you may end up breaking the chain of some restore points.

Adding Backup Repositories with Rotated Drives

Before adding a rotated-drive backup repository, first attach your external drive to the Windows or Linux server you wish to add as a repository and launch the ‘Add New Backup Repository’ wizard, following the configuration and instructions below…

  • Give the repository a name and description
  • Select which server to use as the repository
  • On the server section, click ‘Advanced’, select ‘This repository is backed by rotated hard drives’, and choose the volume of your external drive.
  • Follow all other instructions to complete the Simple Backup Repository addition.

VMCE v9 Study Guide Module 4 – Initial Configuration Adding Windows/Linux servers and Backup Proxies

Finally we are moving on to Module 4 of the Veeam VMCE v9 Study Guide.  In Module 3 we took a look at all of the core components that are required in order to make Veeam Backup & Replication work – in this module we will go one step further and discuss some of the options and features we have when we go through the process of adding these into our Veeam Backup Server.

Adding Microsoft Windows Servers

Windows servers are used for a variety of different roles within VBR.  Before we can assign these roles to the servers, however, we need to add them into our VBR configuration.  Adding Windows servers is done through the Backup Infrastructure view on the Microsoft Windows node (under Managed Servers).  When adding a Microsoft Windows server you need to ensure first that file and printer sharing is enabled on the server – if it isn’t, VBR will be unable to deploy the Veeam Installer service or the Veeam Data Mover service to the server.  To add a Windows server, right-click this node, select ‘Add Server’, and follow these steps and configurations…

  • If prompted, meaning if you used an ‘Add Server’ from anywhere else, select ‘Microsoft Windows’ as your desired server type.
  • Server Name – Specify the server’s FQDN or IP address.  You can also add a description here for future reference.  The default description simply states who added the server and when.
  • Credentials – If you have already stored credentials in VBR and they are valid for this server go ahead and select them.  If not, you are able to click ‘Add’ at this point to add a new set of credentials.  These credentials will be used to deploy both the installer service and the data mover service on the Windows server.
  • Ports – We can also customize any network ports if we would like with this button.  By default the services that may get deployed on a Windows server use the following ports.
    • Veeam Installer Service – 6160
    • Veeam Data Mover Service – 6162
    • Veeam vPower NFS Service – 6161
    • Veeam WAN Accelerator Service – 6164
    • Veeam Mount Server – 6170
  • Ports – Still within this screen we have some Data Transfer options.  The range of ports displayed (default 2500-5000) is used for transmission channels between the source and target servers, with each task utilizing one port.  If you have a small environment, or don’t expect a lot of data traffic, you can scale this down to a smaller range of ports.  Just remember that one port = one concurrent task.
  • Ports – Preferred TCP – Also within this screen we can see the ‘Preferred TCP connection role’ section.  This is used when this Windows server needs to communicate with a server sitting on the other side of a NAT – in that situation, this server would not be able to initiate a connection to the server behind the NAT.  If this is the case, select the ‘Run server on this side’ checkbox to reverse the direction of the connection.
  • Review – simply shows the status of the options selected.
  • Apply – At this step we can review and monitor the steps that VBR has taken to successfully add the Windows Server.

Adding a Linux Server

Before we can add a Linux Backup Repository we must first add a Linux server into our VBR environment.  Just as with Windows, this is done on the Backup Infrastructure view by right clicking the Linux Server node and selecting Add Server.  The following steps and configurations apply to the addition of Linux servers.

  • Name – provide the FQDN or IP address of the Linux Server – an optional Description can also be specified at this point.
  • SSH Connection – Veeam will deploy the required components to a Linux server through an SSH connection.  At this step we need to provide credentials that can connect to our desired Linux server.  If you already have credentials set up, we can simply select them from the drop-down, or click ‘Add’ to create a new set of credentials.  Note, both username/password and identity/pubkey authentication are supported for the SSH credentials.
  • SSH Connection – The advanced section on this screen allows us to further configure how we would like components deployed.  We can specify an ssh timeout value if we please.  By default this is 20000 ms, meaning if a task targeted at this server is inactive after 20000ms, VBR will automatically terminate said task.  Just as with Windows we have the ability to adjust our Data Transfer Options as well, either scaling up or down the port range and in turn scaling up/down our maximum concurrent tasks.  Also, like Windows, we see the ability to select ‘Run server on this side’ if we are deploying outside of a NATed environment.
  • When we move to the next screen we may be prompted to trust the SSH key fingerprint.  When we do this, the fingerprint is saved to the Veeam configuration database.  The fingerprint is then used during every communication task between Veeam components and this Linux server to help prevent man-in-the-middle attacks.  If this key gets updated on the Linux server, you will need to return to this server’s settings within Veeam and run through the wizard again in order to trust the new fingerprint.
  • After clicking ‘Finish’ we are done.

Adding a VMware Backup Proxy

We already know that our backup proxy is used to process and deliver traffic to either another proxy or a backup repository.  By building out multiple proxies we are able to split the load across them and at the same time take the data mover load off of our Veeam Backup Server.  Adding a VMware backup proxy is performed through the Backup Infrastructure view on the Backup Proxies node from within the VBR console, with the following steps and configuration options

  • Right-click the Backup Proxies node and select ‘Add VMware Backup Proxy’
  • Server – Choose Server – Select the Windows server you wish to assign the proxy role to – if you haven’t already added your server to the backup infrastructure you are able to select ‘Add New’ at this point to go through the process of adding a new Windows server (see above).
  • Server – Description – We also have the option of creating a description here as well; by default this just states who added the backup proxy and when.
  • Server – Transport mode – Select your desired transport mode, meaning how you would like the proxy to read/write the data.  By default, VBR will scan the proxy configuration and its connection to datastores in order to determine an optimal transport mode, which will be selected automatically upon reaching this screen.  If we need to override this we can by clicking ‘Choose’.  Our options here are Direct Storage Access, Virtual Appliance, or Network.  See Module 3 for more information about how each of these transport modes works.  From within the Options section of our transport mode selection we can specify additional options for whichever mode we have selected (a small mode-selection sketch follows this list).
    • For Direct Storage Access and Virtual Appliance modes we can choose to either failover to network mode (default) or not.
    • For Network Mode we can choose to transfer VM data over an encrypted SSL connection by selecting ‘Enable host to proxy traffic encryption in Network mode’.
  • Server – Connected Datastores – Allows us to specify which datastores this proxy has a direct SAN or NFS connection to.  By default Veeam will detect all datastores that the proxy has access to, however if you wanted to limit certain proxies to certain datastores you can do so here.
  • Server – Max Concurrent Tasks – We can specify here the number of tasks that the backup proxy will be able to run concurrently.  If this number is reached, no new tasks will start until one has completed.  Keep in mind that Veeam requires 1 CPU core per task, and that increasing concurrent tasks also has the potential to saturate network throughput.
  • Traffic Rules – The traffic rules section allows us to utilize throttling rules in order to limit the OUTBOUND traffic rate for the proxy.  These help to manage bandwidth and minimize impact on the network.  These rules are created globally within VBR and will only display here if the proxy IP happens to fall within the range the rule applies to.  To view the globally set traffic rules we can click the ‘Manage network traffic rules’ link below the table displayed, or click ‘View’ to view a single rule.  We will go over the traffic rules in a bit more detail when we cover the global settings of VBR.
  • Summary – After reviewing the summary select ‘Finish’
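
As a rough model of the failover option mentioned above: the proxy tries its selected transport mode and, if that fails and failover is enabled, drops back to Network mode.  A small sketch under those assumptions – it is not meant to mirror Veeam’s internal mode-detection logic:

```python
def read_vm_data(selected_mode: str, mode_works: dict[str, bool],
                 failover_to_network: bool = True) -> str:
    """Return the transport mode actually used for the job."""
    if mode_works.get(selected_mode, False):
        return selected_mode
    if failover_to_network and mode_works.get("network", False):
        return "network"          # fall back to network (NBD) transport
    raise RuntimeError(f"{selected_mode} failed and failover is disabled")

# Direct storage access is unavailable (e.g. no SAN path), so the job falls
# back to network mode because failover is left enabled (the default).
print(read_vm_data("direct_storage", {"direct_storage": False, "network": True}))
```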

At any time you can come back to the Backup Proxies node and right-click a backup proxy to edit it.  We can also disable backup proxies on an individual basis.  When disabled, a backup proxy will not be used by any backup jobs that can select it.  If you want to remove a backup proxy, that is possible as well.  That said, if the backup proxy is explicitly selected in a job, meaning the job does not automatically select proxies, then you will first need to delete the reference to this proxy in the job before the proxy can be removed.  Removing a backup proxy only removes it from the Backup Proxies node – the server will remain in the Microsoft Windows node.

Adding a Hyper-V Off-Host Proxy

By default, Microsoft Hyper-V hosts perform the role of a proxy – this is called on-host mode.  That said, they take up resources that may be needed to run your actual production environment, so it’s best to add off-host proxies.  We discussed these a bit in Module 3, and if you remember they have the following prerequisites.

  • Windows Server 2008 R2 or higher with Hyper-V role of 2008 R2 or higher installed
  • Must be connected to the shared storage
  • Hardware VSS providers must be installed on host (supplied by vendor)
  • If using CSV, the Hyper-V off host proxy must not be a part of the cluster it is backing up.
  • If backing up SMB3, the local system account on off host proxy must have full access permissions to the file share and must be in the same domain, or in a trusted domain.

To add a Hyper-V off host proxy you need to add the backup proxy role to a Microsoft Windows server within the backup infrastructure utilizing the ‘New Hyper-V Off-Host Backup Proxy’ wizard and the following configuration…

  • Server – select a Windows server to assign the role to; if not listed you can add a new one at this point.  You can also add a description.  By default, Veeam will automatically detect the connected volumes, however if you would like to specify which volumes you want this host to work with you can do so using the Connected Volumes ‘Choose…’ button.  We can also specify the maximum concurrent tasks for this proxy, keeping in mind each concurrent task requires 1 CPU core.
  • In the Traffic Rules selection we can select any rules that will apply to our off host proxy to limit its OUTBOUND traffic rate.  These rules are not created here, they are created globally and only those rules that are applicable to the IP of our proxy are listed.  You can move into the global rules by clicking ‘Manage Network Traffic Rules’ link.
  • Review the summary of task and click ‘Next’ to finish deploying the proxy.

VMCE v9 Study Guide Module 3 – Veeam ONE Components, Prerequisites, and Deployment Scenarios

As we continue along the Veeam v9 VMCE Study Guide it’s time to finish off Module 3 and have a look at Veeam ONE.  For me, I don’t have a lot of experience with Veeam ONE so this will be a section I try to focus on throughout this guide!  Just an update, I’ve written and passed my VMCE at this point, so there’s that!  Yay!  Either way I’m going to try to complete any unfinished portions in an effort at completeness!  So with that, let’s get going… Veeam ONE relies heavily on a client-server architecture.  The architecture of Veeam ONE contains the following components.

Veeam ONE Server

  • The Veeam ONE Server is responsible for gathering all of the data from our virtual environment, vCloud Director, and Veeam Backup & Replication servers.  It takes this data and stores it in its SQL database.  The Veeam ONE Server has a couple of sub-components that are broken out as well:
    • Monitoring Server
      • Handles the collection of data to present to the Monitor client or web ui.
      • Pulls data from both VMware and Hyper-V as well as Veeam Backup & Replication.
    • Reporting Server
      • Provides a set of dashboards and predefined reports.
      • Verifies configuration issues
      • Tracks implemented changes in the environment
      • Helps you adhere to best practices and optimize your environment
      • Provides capacity management

Veeam ONE Monitor Client

  • The Monitor client connects to the monitoring server and basically monitors your virtual environment.  This allows us to choose our connections to our virtual servers, our backup infrastructure, and manage alarms and data that is being monitored.

Veeam ONE Business View

  • Allows grouping of infrastructure objects into categories that better align to the business
  • Groupings/categories are applied to functionality within Monitor and Reporter
  • Can be synchronized with vSphere tags.

Interesting tidbits in regards to Veeam ONE

  • Can be licensed either per socket or per-VM being monitored

Deployment Models

Veeam ONE provides us with a couple different deployment models

Typical Deployment

Just as VBR gives us the opportunity to consolidate all of the components and services onto one server, Veeam ONE does as well.  The typical deployment takes the Veeam ONE Server, Web UI, and Monitor client and installs them all together on the same machine, be it physical or virtual.  The SQL instance can also be installed on this machine as well – by default, Veeam ONE packages SQL Server 2012 Express.  This is a good way to manage a small environment, or to evaluate what Veeam ONE can do for you.  If you need to enable multi-user access to real-time performance data it is possible to install the Veeam ONE Monitor client on separate machines.

A typical installation requires at least 4 cores, a 64-bit OS, and 8GB of RAM, although 16GB is recommended.  It must be installed on Windows 7 SP1 or above, and supports SQL Server, both full and Express editions, from 2005 and up.

Advanced Deployment

The advanced deployment starts to break out some of the individual components to different servers.  The Veeam ONE Server and the Web UI components are installed on separate machines.  The Veeam ONE Monitor client can also be installed on multiple separate machines.  This deployment can still use the Express installation of SQL, however since you are most likely breaking out the components in order to decrease load, you will probably want to use a remote instance of SQL Server for this type of setup.

The Veeam ONE Server requires at least 4 cores, a 64-bit OS, and 8GB of RAM, although 16GB is recommended.  Again, Windows 7 SP1 or above and SQL Server 2005 and up.

The Web UI server requires a minimum of 2 cores and only 64-bit OSes (Windows 7 SP1 and up), along with a minimum of 2GB of RAM.

The Monitor client supports either 32- or 64-bit OSes (Windows 7 SP1 and up) and requires only 1 socket, along with 1GB of memory.

Interesting tidbits around Veeam ONE deployments

  • Supports vSphere 4.1 and above
  • Supports Hyper-V 2008 R2 sp1 and above
  • Supports vCloud Director 5.1 and above
  • Integrates with Veeam B&R 7.0 update 4 and above (standard and above)

VMCE v9 Study Guide Module 3 – VBR Prerequisites, Deployment Scenarios & Upgrades

As we continue on with Module 3 of the Veeam VMCE v9 Study Guide, it’s time to look at VBR prerequisites, the many deployment scenarios available for VBR, and finally what upgrade options we have when upgrading Veeam Backup & Replication to version 9.  One of the benefits of deploying Veeam Backup & Replication is that you can make it as simple as you want, or as hard as you want.  Veeam makes it very easy to deploy VBR and adapt to any size of environment.  To help break down the scenarios Veeam provides three different types of deployments for VBR: Simple, Advanced, and Distributed.

Simple Deployment

Basically, in the simple deployment we are looking at having only one instance of VBR set up and installed on either a physical or virtual machine within our environment.  We have one server, the backup server, which hosts all the roles and components we need to back up our environment.  The backup server at this point would host the following components

  • Veeam Backup Server – for management
  • Backup Proxy – for moving data
  • Backup Repository – for hosting our backups.
  • Mount Server – for restoration
  • Guest Interaction Proxy

Interesting tidbits about Simple Deployment

  • All components are installed automatically
  • The Backup Repository is determined by scanning the volumes of the machine in which we are installing.  The volume with the greatest free disk space is used with a “Backup” folder created on it.
  • Only used if you are evaluating VBR, or have a small number of VMs you need to protect
  • Suggested to install on a VM (but not required) as it would give you the hot-add backup transfer option.

Advanced Deployment

Advanced Deployment is the way to go if you have an environment of any size to back up.  In these cases we can’t put all the load on the Backup Server as it would be too much for it to handle.  In this deployment model we have the following components

  • Backup Server – Our control plane
  • Backup Proxies – Data mover components on separate servers to handle the transfer of data.
  • Backup repositories – Separate servers containing capacity to store our backup files, VM copies, and replica metadata
  • Dedicated Mount Servers – again, separate components in order to efficiently perform application and file level restore back to original production VMs
  • Dedicated Guest Interaction Proxies – separate components allowing us to efficiently deploy runtime process in our Windows VMs.

Interesting tidbits about advanced deployments

  • Allows us to easily scale up and down with our environment by adding or removing components.
  • Backup traffic can be dynamically distributed amongst proxies.
  • Good setup to begin replicating data offsite by deploying proxies in both local and remote sites.
  • Provides HA to our backup jobs by having the ability to allow jobs to failover to other proxies if some become unavailable or overloaded

Distributed Deployment

The distributed deployment is used in cases where environments are spread out geographically, with multiple backup servers installed across many locations and the backup servers themselves federated using Enterprise Manager.  This way jobs can all be managed centrally, and it also provides an easy way to search for and find files across all sites.  This deployment model contains the following components

  • Multiple Veeam Backup Servers for each site
  • Multiple Veeam proxies for each site
  • Multiple repositories located at each site
  • Multiple mount servers and guest interaction proxies at each site
  • Veeam Enterprise Manager Server
  • Optional Veeam Backup Search server to streamline search processes.

Interesting tidbits about the distributed model

  • With Enterprise Manager installed, we are able to flexibly delegate restore operations to users within the environment
  • Centralized license management
  • All the benefits of the advanced model

Upgrading Veeam Backup & Replication to v9

If you have ever had to upgrade an instance of Veeam Backup & Replication you should know that it is a pretty simple product to upgrade – with that said, you should always do your due diligence – backing up your SQL database and Veeam configuration is always a good idea – as well as ensuring you have completely been through all of the release notes.

There are a few limitations and concerns you might want to pay attention to when looking to upgrade to Veeam Backup & Replication v9

  • Supports a direct upgrade from version 7.0 Update 4 and 8.0
  • If you have any Windows 2003 servers acting as backup infrastructure components within your current configuration, they will need to be removed before the upgrade as they aren’t supported – leaving them in place will cause the upgrade to fail.
  • The first time you connect to your newly upgraded backup server with a remote backup console, you will be prompted to apply the update to that console as well.
  • The Console cannot be downgraded
  • The first time you login after the upgrade Veeam will prompt you to update all of the other backup infrastructure in your environment such as proxies, repositories, etc.  These are upgraded in an automated deployment by the Veeam Backup Server.

VMCE v9 Study Guide Module 3 – Remaining Veeam Backup & Replication Core Components

Aside from our proxies and repositories there are a number of remaining Veeam Backup & Replication core components to cover.  Today we will try to finish the component section of Module 3 of the Veeam VMCE v9 Study Guide.  Some of these components are required, whereas some are optional – but all are certainly fair game on the VMCE exam so it’s best to know them!

Guest Interaction Proxy

During a backup Veeam will interact with the guest to do several things – to do this it deploys a runtime process within each VM it is backing up (be it Windows or Linux) to perform the following operations

  • Application Aware Processing
  • Guest File System indexing
  • Transaction Log processing

In older versions all of this was done by the backup server, causing higher resource usage on the backup server, or issues if the backup server and processed VMs had degraded, slow, or non-existent network connectivity.  As of v9, the process of performing the above 3 actions and deploying these runtime processes can be handled by a Guest Interaction Proxy (Windows only, will not work with Linux VMs).  Again, some interesting facts about the GIP.

  • Only utilized when processing Windows based VMs.  Linux VMs will still receive these packages from the Backup Server.
  • Only available in Enterprise and Enterprise Plus editions.
  • Can utilize multiple Guest Interaction Proxies to improve performance; recommended to have one at each site if you have a ROBO setup.
  • Can only be deployed on a Windows based server, be it physical or Virtual.
  • Must have either a LAN or VIX connection to the processed VM.
  • Can be installed on the same server as the proxy, repository, backup server, WAN Accelerator, etc.
  • Defined on the Guest Processing step of the backup/replication job.  We can assign each job manually to use a certain proxy or let Veeam decide.  If letting Veeam automatically determine which proxy to use, it will go in the following order (see the selection sketch after this list)
    • A machine in the same network as the protected VM that isn’t the Backup Server
    • A machine in the same network as the protected VM that is the Backup Server
    • A machine in another network as the protected VM that isn’t a Backup Server
    • A machine in another network as the protected VM that is a Backup Server.
    • If at any point it finds more than one machine meeting the above criteria, it selects the one which is “less loaded” – the one with the least number of tasks already being performed.
    • If at any point a GIP fails, the job can fail over to the Backup Server and utilize it to perform GIP roles as it has done in previous versions.
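
The selection order above is essentially a priority sort with a tie-breaker on load.  A small Python sketch of that logic, using invented field names purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    same_network_as_vm: bool   # shares a network with the protected VM?
    is_backup_server: bool     # is this the backup server itself?
    active_tasks: int          # current load

def pick_gip(candidates: list[Candidate]) -> Candidate:
    """Prefer: same network & not the backup server, then same network &
    backup server, then other network & not the backup server, then other
    network & backup server. Ties go to the least-loaded machine."""
    def rank(c: Candidate) -> tuple[int, int]:
        priority = {(True, False): 0, (True, True): 1,
                    (False, False): 2, (False, True): 3}
        return (priority[(c.same_network_as_vm, c.is_backup_server)],
                c.active_tasks)
    return min(candidates, key=rank)

candidates = [Candidate("backup-server", True, True, 0),
              Candidate("gip-01", True, False, 4),
              Candidate("gip-02", True, False, 1)]
print(pick_gip(candidates).name)   # gip-02: same network, not the backup server, least loaded
```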

Mount Server

A mount server is required in order to restore VM guest OS files and application items back to their original locations.  Veeam uses this server to mount the content of the backup file to a staging server; this staging server should be located in the same location as the backup repository where the files are stored, otherwise restorations may end up traversing the WAN twice.  To help prevent this, Veeam implements a mount server.

When a file or application item is restored to the original location, Veeam will mount the contents of the backup from the repository onto the mount server, and then copy the data from the mount server to the original location.

Interesting tidbits about mount servers…

  • Direct SQL and Oracle restores do not go through the mount server, they are mounted directly to the target VM.
  • A mount server is created for every backup repository and associated with it.  This is a Repository setting.
  • By default the mount server is created on
    • Backup repositories – if they are Windows-based, the default mount server is the repository server itself.
    • Backup server – For any Linux-based or shared-folder repositories, and for deduplicating storage appliances, the mount server is the backup server
    • Veeam Backup & Replication console – Anywhere the console is installed, a mount server is installed as well; however, it isn’t automatically registered within B&R
  • Scale-Out Backup Repositories require you to assign a mount server for each and every extent included.
  • Mount servers can only be Windows based, but can be physical or virtual.
  • In order to restore from storage snapshots the mount server must have access to the ESXi host which will host the temporary VM.

WAN Accelerators

WAN acceleration within Veeam works by using dedicated components to globally cache and deduplicate data between sites.  Basically we would need a WAN accelerator at both our source and target sites to do so.  These sit in between the proxies, meaning data flows from the source backup proxy, to the source WAN accelerator, to the target WAN accelerator, to the target backup proxy, and then to either its replication target or backup repository.

Each accelerator will create a folder called VeeamWAN.  On the source, files and digests required for deduplication are stored here.  On the target, a global cache is stored.

WAN accelerators can require a lot of disk space to hold either the digests or the global cache, and therefore require some sizing exercises when creating them.  Certainly this depends on the amount of source VM data you are backing up, but a rule of thumb is to provide 20GB of disk space for each TB of VM disk capacity.  On the target we store the global cache, which is a bit heavier in terms of capacity requirements – the recommendation here is to provide 10GB of space for each type of OS you are processing (by default, 100GB is allocated, so 10 OSes).  Some situations may require extra space on the target accelerator, for example if digest data needs to be recalculated or the cache has been cleared.  To account for this it’s also recommended you provide 20GB per 1TB of source VM data in your target cache as well.

Interesting tidbits about WAN acceleration

  • Must be installed on a 64-bit Windows-based machine, physical or virtual
  • Can be intermingled with other proxies and repositories
  • For digest data on the source accelerator, provide 20GB of space for each 1 TB of data being backed up.
  • For global cache provide 10GB of space for each OS (Default is 100GB)

Veeam Backup Enterprise Manager

This component is optional and is really intended for those that have a distributed deployment containing multiple backup servers.  VEB essentially federates your servers and offers a single pane of glass view of your backup servers and their associated jobs.  From here you can do the following

  • Control and Manage jobs
  • Edit and Clone Jobs
  • Monitor job state
  • Report on success/failure across VBR Servers
  • Search for guest OS files across VBR Servers and restore via one-click

Interesting tidbits around VEB

  • Can be installed on either a physical or virtual machine, so long as it’s Windows

Veeam Backup Search

Veeam Backup Search is an option that will greatly help reduce load on the VEB server if you frequently need to search through a number of backups.  Veeam Backup Search is deployed on a Windows machine running Microsoft Search Server; it runs the MOSS Integration service and updates the index databases of MSS – leaving VEB the ability to simply pass the backup search queries and have the data passed back.

Veeam Gateway Server

The Veeam gateway server is almost like a connector service, bridging the network between backup proxies and backup repositories.  The only time we would need to deploy a gateway server is if we are using one of the following scenarios

  • Shared Folder backup repositories
  • EMC DataDomain or HPE StoreOnce appliances

ExaGrid, another deduplicating appliance supported by Veeam, actually hosts the Veeam Data Mover service directly on the box; shared folder backup repositories and the Data Domain/StoreOnce appliances do not – thus, we use a gateway server to host and run the Veeam Data Mover services for them.  The gateway server is configured during the “Add Backup Repository” wizard.  When prompted we can select our gateway server manually, or choose to let Veeam decide the best fit.  If we let Veeam do the choosing, the gateway server is selected following the criteria below

  • For a backup job, the role of the gateway server is assigned to the proxy that was first to process VM data for a backup job.
  • For Backup Copy jobs, the role of the gateway server is assigned to the mount server associated with the backup repository.  If for some reason the mount server is not available this will fail over to any WAN Accelerators that might be used for that job.
  • For Backup to Tape jobs the role of the gateway server is assigned to the Veeam Backup Server.

Veeam will select a different number of gateway servers per job depending on the multitasking settings of the repository – per-VM backup chains by default have multiple write streams, therefore each VM will be assigned a gateway server, whereas normal backup chains only have one gateway server assigned.
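
Pulling the selection criteria above together, here’s a small illustrative Python sketch (the job-type strings and structure are mine, not a Veeam API):

```python
from typing import Optional

def select_gateway(job_type: str, first_proxy: str, mount_server: Optional[str],
                   wan_accelerators: list[str], backup_server: str) -> str:
    """Gateway server selection as described above, per job type."""
    if job_type == "backup":
        # First proxy to process VM data for the job takes the gateway role.
        return first_proxy
    if job_type == "backup_copy":
        # Mount server of the repository, falling back to a WAN accelerator
        # used by the job if the mount server is unavailable.
        if mount_server is not None:
            return mount_server
        if wan_accelerators:
            return wan_accelerators[0]
    if job_type == "backup_to_tape":
        return backup_server
    raise ValueError(f"unhandled job type: {job_type}")

print(select_gateway("backup_copy", "proxy01", None, ["wan-tgt01"], "vbr01"))  # wan-tgt01
```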

Tape Server

A tape server in Veeam Backup & Replication is responsible for hosting a tape device.  Simply put, it’s a Windows machine that is connected to some sort of tape library.  The tape server takes on somewhat of a proxy role for tapes, performing the reading and writing to tapes.