Monthly Archives: August 2016

Cohesity 3.0 – One platform for all your secondary storage!

After just over half a year of making their 1.0 product generally available, Cohesity, a company based out of Santa Clara, has announced version 3.0 of their flagship secondary storage products, DataProtect and DataPlatform.  I had the chance to take a 1:1 briefing with Cohesity to check out what's new, find out just how they define secondary storage, and thought I'd share my thoughts around the new features and overall solution from Cohesity here…

What is secondary storage?

Before we get too in-depth around the features and benefits of the Cohesity platforms, it's nice to stop and take a look at just what secondary storage is.  Quite simply, Cohesity sees secondary storage as any storage hosting data that isn't "mission critical", and surprisingly they are also discovering that this non "mission critical" data takes up the majority of an organization's overall capacity.  As shown below, data such as backups, test/dev copies, file shares, etc. all fit into the secondary storage profile – data that is rarely used, fragmented and complex to manage, data that Cohesity defines as "Dark Data".

[Image: the secondary storage iceberg]

All of this "Dark Data" can become a bit of a challenge to manage and maintain – we end up with numerous backups that we don't touch, and we have many appliances and servers within our datacenter performing various functions such as deduplication, compression, analytics, etc.  All of these moving pieces come with their own cost and their own hardware footprint, and for the most part have no way of interfacing with each other, nor do they have the ability to scale together.  This is where Cohesity makes its play – simplifying secondary storage within your datacenter.

Cohesity – All your secondary storage – One Hyperconverged platform

Cohesity moves into the datacenter and aims to eliminate all of those secondary storage silos.  They do this by consolidating your backups, file shares, test/dev copies, etc. and moving them all onto a Cohesity appliance.  To get the data there, Cohesity first leverages their DataProtect platform.  DataProtect provides the means of backup: using seamless integration with your vSphere environment, Cohesity takes on the role of your backup infrastructure.  Utilizing user-created policies based on SLA requirements, Cohesity begins onboarding your backup data, adhering to specified RPOs, retention policies, etc.  From there, DataProtect also adds the ability to offload to cloud for archival purposes – think in terms of offloading certain restore points or aged backup files to Amazon, Azure, or Google.  Once the data resides on a Cohesity appliance, a number of benefits are presented to customers: think analytics, or a Google-like search across all of your secondary data, looking for pre-defined templates such as social security numbers or credit card numbers.  DataPlatform also provides copy data management, allowing you to quickly spin up exact, isolated copies of your production environment directly on the Cohesity appliance.  This allows things such as patch management testing, application testing, or development environments to be deployed in a matter of minutes, utilizing flash-accelerated technologies on the appliance itself.

[Image: Cohesity DataPlatform overview]

Integrating all of these services into one common platform for sure has its benefits – lowering TCO for one, not having to pony up for support and licensing for 4 different platforms is the first thing that comes to mind.  But beyond that it provides savings in terms of OpEx as well – no more do we have to learn how to operate and configure different pieces of software within our environment dealing with our secondary storage.  No more do we have to spend the time copying data between solutions in order to perform various functions and analytics on it.  We can just use one appliance to do it all, scaling as we need by adding nodes into the cluster, and in turn, receiving more compute, memory, and storage capacity, thus increasing performance of the secondary storage environment overall.

So what’s new in 3.0?

As I mentioned before, this is Cohesity's third release in just over half a year.  We saw 1.0 GA in October of 2015, 2.0 followed in February of this year adding replication, cloning and SMB support, and now we have 3.0 hitting the shelves with the following improvements and features…

  • Physical Windows/Linux Support – perhaps the biggest feature within 3.0 is the ability to now protect our physical Windows and Linux servers with DataProtect.  The same policy-based engine can now process those physical servers we have in our environment and lets us apply all of the analytics and search capabilities to that data, just as we always could for virtual workloads.
  • VMware SQL/Exchange/SharePoint Support – As we all know, in the world of IT it's really the application that matters.  3.0 provides the ability to perform application-aware backups of our virtualized SQL, Exchange, and SharePoint servers in order to ensure we are getting consistent and reliable backups, which can be restored to any point in time or used to restore individual application objects.  3.0 also adds source-side deduplication for these application-aware backups, meaning only unique blocks of data are transferred into the Cohesity platform during a database backup (a rough sketch of the general idea follows this list).
  • Search and recovery from Cloud – 3.0 also brings us the ability to search our data that has been archived to cloud and, more importantly, perform granular object-level recovery on that cloud-archived data as well.  This means the cost of moving data out of the cloud should decrease, as we are only pulling back the data we need.
  • Performance Enhancements – Utilizing a technology based upon parallel ingest, Cohesity can now spread the load of ingesting individual VMs across all the nodes within its cluster – resulting in not only a capacity increase when you scale, but a performance increase as well.  They have also done a lot of work around their file access services, basically doubling the IOPS and throughput.
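Since source-side deduplication comes up a couple of times above, here's a minimal sketch of the general idea – fingerprint fixed-size blocks and only ship the ones the target hasn't already seen.  This is my own illustration in Python with made-up block sizes, function names and paths, not Cohesity's actual implementation.

    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # illustrative fixed-size 4 MB blocks

    def blocks_to_send(path, known_hashes):
        """Fingerprint each block of the file and return only the blocks whose
        hashes the target (known_hashes) has never seen; duplicate blocks are
        skipped, and only their references would travel over the wire."""
        unique = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                if digest not in known_hashes:
                    known_hashes.add(digest)
                    unique.append((digest, block))
        return unique

    # Usage: ship only what the appliance hasn't seen yet (hypothetical path)
    already_on_target = set()
    payload = blocks_to_send("/backups/db_full.bak", already_on_target)
    print(f"{len(payload)} unique blocks to transfer")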

And to top it all off, Best of VMworld

[Image: Best of VMworld 2016 award]

A huge congrats to Cohesity on the 3.0 announcement, and an even bigger congrats for taking "Best of VMworld 2016" in the Data Protection category!  If you want to learn more I definitely recommend checking out Cohesity's site here, or, if you happen to be at VMworld, you have a couple more days to drop in and say hi at booth #827!

VMCE v9 Study Guide Module 4 – Initial Configuration – Adding Windows/Linux Servers and Backup Proxies

Finally we are moving on to Module 4 of the Veeam VMCE v9 Study Guide.  In Module 3 we took a look at all of the core components that are required in order to make Veeam Backup & Replication work – in this module we will go one step further and discuss some of the options and features we have when we go through the process of adding these into our Veeam Backup Server.

Adding Microsoft Windows Servers

Windows servers are used for a variety of different roles within VBR.  Before we can assign these roles to the servers, however, we need to add them into our VBR configuration.  Adding Windows servers is done through the Backup Infrastructure view on the Microsoft Windows servers node (under Managed Servers).  When adding a Microsoft Windows server you need to ensure first that file and printer sharing is enabled on the server – if it isn't, VBR will be unable to deploy the Veeam Installer service or the Veeam Data Mover service to the server.  To add a Windows server, right-click the node, select 'Add Server' and follow these steps and configurations…

  • If prompted, meaning if you used an ‘Add Server’ from anywhere else, select ‘Microsoft Windows’ as your desired server type.
  • Server Name – Specify the server's FQDN or IP address.  You can also add a description here for future reference.  The default description simply states who added the server and when.
  • Credentials – If you have already stored credentials in VBR and they are valid for this server go ahead and select them.  If not, you are able to click ‘Add’ at this point to add a new set of credentials.  These credentials will be used to deploy both the installer service and the data mover service on the Windows server.
  • Ports – We can also customize any network ports if we would like with this button.  By default the services that may get deployed on a Windows server use the following ports.
    • Veeam Installer Service – 6160
    • Veeam Data Mover Service – 6162
    • Veeam vPower NFS Service – 6161
    • Veeam WAN Accelerator Service – 6164
    • Veeam Mount Server – 6170
  • Ports – Still within this screen we have some Data Transfer options.  The range of ports displayed (default 2500-5000) is used for transmission channels between the source and target servers, with each task utilizing one port.  If you have a small environment, or don't expect a lot of data traffic, you can scale this down to a smaller range of ports.  Just remember that one port = one concurrent task (a quick sketch of this follows the list below).
  • Ports – Preferred TCP – Also within this screen we can see the 'Preferred TCP connection role' section.  This comes into play when the Windows server being added is deployed outside of a NATed environment; in that case it would not be able to initiate a connection to another server sitting on the other side of the NAT.  If this applies, select the 'Run server on this side' checkbox to reverse the direction of the connection.
  • Review – simply shows the status of the options selected.
  • Apply – At this step we can review and monitor the steps that VBR has taken to successfully add the Windows Server.
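Since the "one port = one concurrent task" rule trips people up, here's a tiny sketch of how the default service ports and the data transfer range relate to concurrency.  It's just illustrative Python; the arithmetic is my own reading of the rule above rather than anything documented by Veeam.

    # Default service ports on a managed Windows server, as listed above
    VEEAM_SERVICE_PORTS = {
        "Veeam Installer Service": 6160,
        "Veeam vPower NFS Service": 6161,
        "Veeam Data Mover Service": 6162,
        "Veeam WAN Accelerator Service": 6164,
        "Veeam Mount Server": 6170,
    }

    def max_concurrent_tasks(first_port=2500, last_port=5000):
        """One transmission port equals one concurrent task, so the size of the
        data transfer range caps how many tasks can run against this server."""
        return last_port - first_port + 1

    print(max_concurrent_tasks())            # 2501 with the default 2500-5000 range
    print(max_concurrent_tasks(2500, 2549))  # 50 tasks for a deliberately small range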

Adding a Linux Server

Before we can add a Linux backup repository we must first add a Linux server into our VBR environment.  Just as with Windows, this is done in the Backup Infrastructure view by right-clicking the Linux server node and selecting 'Add Server'.  The following steps and configurations apply to the addition of Linux servers.

  • Name – provide the FQDN or IP address of the Linux Server – an optional Description can also be specified at this point.
  • SSH Connection – Veeam will deploy the required components to a Linux server through an SSH connection.  At this step we need to provide credentials that can connect to our desired Linux server.  If you already have credentials set up you can simply select them from the drop-down, or click 'Add' to create a new set of credentials.  Note that both username/password and Identity/Pubkey authentication are supported for the SSH credentials.
  • SSH Connection – The advanced section on this screen allows us to further configure how we would like components deployed.  We can specify an SSH timeout value if we please.  By default this is 20000 ms, meaning if a task targeted at this server is inactive for 20000 ms, VBR will automatically terminate said task.  Just as with Windows we have the ability to adjust our Data Transfer options as well, either scaling the port range up or down and, in turn, scaling our maximum concurrent tasks up or down.  Also, like Windows, we see the ability to select 'Run server on this side' if we are deploying outside of a NATed environment.
  • When we move to the next screen we may be prompted to trust the SSH key fingerprint.  When we do this, the fingerprint is saved to the Veeam configuration database.  The fingerprint is then used during every communication between Veeam components and this Linux server to help prevent man-in-the-middle attacks.  If this key gets updated on the Linux server, you will need to return to this server's settings within Veeam and run through the wizard again in order to trust the new fingerprint.
  • After clicking ‘Finish’ we are done.

Adding a VMware Backup Proxy

We already know that our backup proxy is used to process and deliver traffic to either another proxy or a backup repository.  By building out multiple proxies we are able to split the load across them and, at the same time, take the data mover load off of our Veeam Backup Server.  Adding a VMware backup proxy is performed through the Backup Infrastructure view on the Backup Proxies node from within the VBR console, with the following steps and configuration options…

  • Right-click the Backup Proxies node and select ‘Add VMware Backup Proxy’
  • Server – Choose Server – Select the Windows server you wish to assign the proxy role to – if you haven't already added your server to the backup infrastructure you are able to select 'Add New' at this point to go through the process of adding a new Windows server (see above).
  • Server – Description – We also have the option of creating a description here as well; by default this just states who added the backup proxy and when.
  • Server – Transport mode – Select your desired transport mode, meaning how you would like the proxy to read/write the data.  By default, VBR will scan the proxy configuration and its connection to datastores in order to determine an optimal transport mode, which will be selected automatically upon reaching this screen.  If we need to override this we can by clicking 'Choose'.  Our options here are Direct Storage Access, Virtual Appliance, or Network.  See Module 3 for more information about how each of these transport modes works.  From within the Options section of our Transport Mode selection we can specify additional options for whichever mode we have selected.
    • For Direct Storage Access and Virtual Appliance modes we can choose to either failover to network mode (default) or not.
    • For Network Mode we can choose to transfer VM data over an encrypted SSL connection by selecting ‘Enable host to proxy traffic encryption in Network mode’.
  • Server – Connected Datastores – Allows us to specify which datastores this proxy has a direct SAN or NFS connection to.  By default Veeam will detect all datastores that the proxy has access to, however if you want to limit certain proxies to certain datastores you can do so here.
  • Server – Max Concurrent Tasks – We can specify here the number of tasks that the backup proxy will be able to run concurrently.  Once this number is reached, no new tasks will start until one has completed.  Keep in mind that Veeam requires 1 CPU core per task, and increasing concurrent tasks also has the potential to saturate your network throughput (a rough sizing sketch follows this list).
  • Traffic Rules – The traffic rules section allows us to utilize throttling rules in order to limit the OUTBOUND traffic rate for the proxy.  These help to manage bandwidth and minimize impact on the network.  These rules are created globally within VBR and will only display here if the proxy IP happens to fall within the range the rule applies to.  To view the globally set traffic rules we can click the 'Manage network traffic rules' link below the table, or click 'View' to view a single rule.  We will go over the traffic rules in a bit more detail when we cover the global settings of VBR.
  • Summary – After reviewing the summary select ‘Finish’
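To make the core-per-task rule concrete, here's a rough sizing sketch.  The reserve_cores idea is my own assumption for illustration (holding cores back for other roles co-located on the same box), not an official Veeam recommendation.

    def proxy_max_concurrent_tasks(cpu_cores, reserve_cores=0):
        """Rule of thumb from above: one CPU core per concurrent task.
        reserve_cores optionally holds cores back for the OS or other
        roles on the proxy (purely an assumption for illustration)."""
        return max(cpu_cores - reserve_cores, 1)

    print(proxy_max_concurrent_tasks(8))                   # 8 tasks on an 8-core proxy
    print(proxy_max_concurrent_tasks(8, reserve_cores=2))  # 6 tasks if two cores are held back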

At any time you can come back to the Backup Proxies node and right-click a backup proxy to edit it.  We can also disable backup proxies on an individual basis – when disabled, a backup proxy will not be used by any backup jobs that could otherwise select it.  If you want to remove a backup proxy that is possible as well.  That said, if the backup proxy is explicitly selected in a job, meaning the job does not automatically select proxies, then you will first need to delete the reference to this proxy in the job before the proxy can be removed.  Removing a backup proxy only removes it from the Backup Proxies node; the server will remain in the Windows servers node.

Adding a Hyper-V Off-Host Proxy

By default, Microsoft Hyper-V hosts perform the role of a proxy themselves – this is called on-host mode.  That said, they take up resources that may be needed to run your actual production workloads, so it's often best to add off-host proxies.  We discussed these a bit in Module 3, and if you remember they have the following prerequisites.

  • Windows Server 2008 R2 or higher with Hyper-V role of 2008 R2 or higher installed
  • Must be connected to the shared storage
  • Hardware VSS providers must be installed on host (supplied by vendor)
  • If using CSV, the Hyper-V off host proxy must not be a part of the cluster it is backing up.
  • If backing up SMB3, the local system account on the off-host proxy must have full access permissions to the file share and must be in the same domain, or in a trusted domain.

To add a Hyper-V off host proxy you need to add the backup proxy role to a Microsoft Windows server within the backup infrastructure utilizing the ‘New Hyper-V Off-Host Backup Proxy’ wizard and the following configuration…

  • Server – select a Windows server to assign the role to; if the server isn't listed you can add a new one at this point.  You can also add a description.  By default, Veeam will automatically detect the connected volumes, however if you would like to specify which volumes you want this host to work with you can do so using the Connected Volumes 'Choose…' button.  We can also specify the maximum concurrent tasks for this proxy, keeping in mind that each concurrent task requires 1 CPU core.
  • In the Traffic Rules section we can see any rules that will apply to our off-host proxy to limit its OUTBOUND traffic rate.  These rules are not created here; they are created globally, and only those rules applicable to the IP of our proxy are listed.  You can move into the global rules by clicking the 'Manage Network Traffic Rules' link.
  • Review the summary of the task and click 'Next' to finish deploying the proxy.

Veeam announces new Availability Platform; moves into SaaS space with Office 365 Backup!

Today during Veeam's "Next Big Thing" event they announced a new all-encompassing Availability Platform.  By leveraging and adding new features to already existing products (Veeam Backup & Replication, Veeam Cloud Connect and Veeam ONE), tying in some newly announced products (Veeam Backup for Office 365, Veeam Availability Console), and adding in some new feature-packed versions of their products supporting physical systems (Veeam Agents for Linux/Windows), Veeam is set to deliver an all-encompassing platform to customers of any size, small or enterprise, ensuring that all their data is protected and available no matter where it may reside.

[Image: Veeam Availability Platform overview]

Although the event was entitled "Next Big Thing" it really should have been plural (Things), as a lot was announced, released, and talked about.  If we look at the above graphical representation of the platform we see a number of products that we may not recognize; i.e. the Veeam Availability Console, Veeam Availability Orchestrator, Veeam Agents???  You may not recognize these – some are new, some are rebranded – so let me try to summarize all the announcements as best I can…

Veeam Backup for Office 365

So this one isn't even shown in the platform graphic but hey, no point in beating around the bush here – this is probably the announcement I'm most excited about.  As a customer I was ecstatic when Veeam announced their support for Microsoft Exchange – as an admin, I could now process my Exchange backups and perform granular restores right down to item level, such as individual messages, right back into my co-workers' mailboxes!  It was awesome!  Then, something happened – the way organizations started thinking about delivering email changed.  Being in education it was a pretty easy decision to simply move into Office 365 – the price was right 🙂  No longer do we have to maintain 7 or 8 servers just to run our email system – put it in the cloud, set it and forget it!  That said, being in the cloud is great and all – but when those high-level executives accidentally delete that important email, where do you think they will run to?  No matter what, we in IT will still be the ones responsible, and in some cases, the ones who take the blame if we can't restore something – it doesn't matter that it's in the cloud or out of our hands – it's an IT issue!

[Image: Veeam Backup for Microsoft Office 365]

That's why when Veeam announced support today for Office 365 I immediately started poking around looking for some sort of beta list!  Bringing the same functionality that they have for on-premises Exchange environments to Office 365 is awesome!  Want to use the explorers?  Sure!  Need to restore individual emails/mailboxes/folders?  You can do that too!  Veeam Backup for Microsoft Office 365 is aimed to be released in Q4 of this year, but here is the best part – if you are a Veeam Availability Suite customer or a Veeam Backup & Replication Enterprise Plus customer you can get your first three-year subscription to this product absolutely free.  For those running Enterprise or Standard, don't feel ignored – you can pick up a free 1-year subscription!

Veeam Availability Console

Two years ago at VeeamON we saw Veeam Endpoint Backup announced – a free product that we could use to back up our Windows endpoints.  There was always some "give" within the support for the product, as the messaging was always "back up your client endpoints AND those few SERVERS you still have running physical workloads".  Although we initially saw some integration into Veeam Backup & Replication, there was never really a true management interface to handle these backups or deploy configurations to the endpoints we wanted to process.  This is where the Veeam Availability Console comes into play – think of this as, dare I say, the single pane of glass to manage your Veeam environment, both VBR jobs as well as jobs from the Veeam Agents for Windows and Linux – whether these workloads and backups are on-premises or in the cloud!

[Image: Veeam Availability Console]

The Veeam Availability Console is a cloud-enabled platform, allowing both enterprises and service providers to streamline their Veeam deployments, and manage all of those remote environments, providing the framework for managing all licensed components of the Veeam Availability Platform.  Think of managing your physical and virtual backups, backups from VMs running in the cloud, and being able to restore these to your environment, or directly to an Azure instance!

As far as who this is targeted at, service providers come to mind – those Veeam Cloud Connect providers can certainly benefit from this!  But aside from the obvious, Veeam is making this available to enterprise deployments as well.  For those with a lot of endpoints or a lot of distributed deployments of Veeam Backup & Replication this can be a great fit, providing that single place to go to manage all of your remote and branch office deployments – essentially making YOU a Veeam Cloud Connect provider for your business!  Veeam Availability Console is expected to be released in Q1 2017!

Veeam Agent for Windows/Linux

Staying with the theme of physical support, we saw the Veeam Endpoint Backup product get a facelift today as well – to keep up with its Linux counterpart, Veeam Endpoint Backup will now be known as Veeam Agent for Windows.  That said, rebrands and renames are not too exciting – new features and subsequent versions are – so let's talk about those!  Veeam Agent for Windows/Linux will now come packaged in three different editions – the free version as it stands today will remain there, always free, however Veeam has added a Workstation edition along with a Server edition to complement the functionality provided.  Cleverly, Workstation will target those looking to back up, well, workstations, and Server will support those looking to back up servers, adding certain features to provide enterprise functionality.  Think of things like application-aware processing to get those consistent backups, transaction log processing to protect those physical SQL servers, and guest file indexing to provide a fast search capability for finding and restoring files.  These are the types of features that will now be available in either the Workstation or Server editions of the Veeam Agents.

Along with those features we also see a couple of new benefits in the newly released versions.  The first is the Configuration and Management API – Veeam Agents licensed with Workstation or Server will now expose an API allowing customers to centrally deploy the products, complete with a backup job configured, to their endpoints and servers (think management from the Availability Console here).  We also see a backup cache – meaning backups can run and end users can stay protected even if their backup target or repository isn't within reach.  Think of your CEO on a plane if you will, working on a very important (yet very boring) spreadsheet.  They make some changes and somehow end up losing the file – Veeam Agent for Windows could still process this backup from 15000 feet, caching it locally on the workstation while the target is offline and, in turn, moving it to the repository when it becomes available again.  Meaning we are protected even when we are remote!  A small but mighty feature that I'm sure will save a lot of headaches for a lot of IT admins.

[Image: Veeam Agent editions comparison]

Also, as with any paid version of a product we now see complete enterprise technical support for the Veeam agents!  Veeam hasn’t forgotten about that Free product either – along with adding features to the Workstation and Server versions we see some new enhancements to the Free edition as well – Windows 2016 support, Direct restore to Azure, and Direct restore to Hyper-V just to name a few.  Veeam Agents will be licensed per agent, with an annual subscription model!  We can expect the Linux and Windows agent to be released in November and December of this year respectively!

Veeam Availability Orchestrator

Although Veeam Availability Orchestrator (VAO) had already been announced, we'd yet to see any sort of glimpse into what the product can do.  Today that all changed.  We saw how VAO can take those DR plans that we have in place and essentially test, execute, and maintain them for us.  VAO is truly a multi-hypervisor DR machine for your organization, providing a lot of the features you need in order to be successful when it matters the most.

Utilizing technologies such as vPower and SureBackup/SureReplica, VAO can non-disruptively test our disaster recovery plan and workflow – eliminating the need for time-consuming, expensive, manual processes and ensuring things will work just as you planned.

[Image: Veeam Availability Orchestrator]

In terms of documentation, have you updated your DR plan every single time you added a new service or VM?  Do you ensure that all the steps are properly changed when you change something within your environment?  If you answered yes then I praise you, but I know I surely have not – I'll revisit it during that quarterly review time scheduled on my calendar and just hope nothing happens between now and then – not the best strategy!  VAO solves this issue by automatically producing DR documentation, dynamically and on the fly, ensuring you always have the most up-to-date documentation and are in complete compliance with your DR requirements when push comes to shove!  VAO, which will be licensed per VM with an annual subscription, is targeted to hit the market sometime in Q1 2017, with a beta sometime next month.  Be the first to know here.

But what about the Veeam Availability Suite?

Oh yeah – lest we forget these products!  Veeam has been slowly announcing features for the next release of their flagship software, Veeam Backup & Replication v9.5.  We have already been told about integration with Nimble arrays, Direct Restore to Azure, full Windows Server 2016 support and enhanced VMware vCloud Director integration, but today Veeam announced perhaps some of the most interesting and exciting features to ship with version 9.5!

ReFS Integration for VBR – as we all know, ReFS is Microsoft's "next gen" file system, with version 3.0 set to ship with Server 2016 when it's released!  To be honest I've not done enough homework on ReFS to delve deep into the details of how it works, but what I do know is that it includes a number of automatic integrity checks and data scrubbing operations built into the filesystem, as well as some interesting features when it comes to failure and redundancy.  The feature most useful to Veeam customers, though, is based around how ReFS provides an allocate-on-write model for disk updates.  Think of your repositories here – when using NTFS as an underlying repository, creating a synthetic full means Veeam actually builds a new full backup file out of the previous backup chain (full and incrementals) on disk, without having to transfer production datastore data.  To do this it needs space – space to create a temporary full backup file and merge incrementals into it, nearly doubling the size required on disk.  ReFS handles this a bit differently – utilizing APIs provided by Microsoft, and integration into the filesystem provided by Veeam, Veeam is able to leverage ReFS in a way that it can move metadata pointers around, eliminating the need to actually duplicate data, both saving capacity and increasing performance DRAMATICALLY when creating synthetic full backups.  Backup & Replication v9.5 introduces this technology as fast cloning!!!  And I know I've mentioned a Windows-specific feature here, but since it's a feature implemented on the repository, both Hyper-V and VMware customers will be able to take advantage of it!
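To make the capacity difference concrete, here's a very rough back-of-the-napkin model in Python.  The chain sizes are made up, and the "zero extra space" for fast clone ignores the metadata ReFS still has to write – treat it as an illustration of the idea above, not a Veeam sizing formula.

    def synthetic_full_extra_space_gb(full_gb, incremental_gbs, refs_fast_clone):
        """Rough model of the temporary extra capacity a synthetic full needs.
        On NTFS a brand new full file is physically written from the previous
        full plus incrementals; with ReFS fast clone existing blocks are only
        referenced, so the extra space is approximated here as zero."""
        if refs_fast_clone:
            return 0
        return full_gb + sum(incremental_gbs)

    chain = dict(full_gb=2000, incremental_gbs=[100, 120, 90, 110])
    print(synthetic_full_extra_space_gb(**chain, refs_fast_clone=False))  # 2420 GB extra on NTFS
    print(synthetic_full_extra_space_gb(**chain, refs_fast_clone=True))   # ~0 GB extra with fast clone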

Enterprise Scalability Enhancements – Many enhancements have been made to the VBR processing engine, providing even more backup and VM restore acceleration technologies helping you to get to that infamous low RTPO Veeam provides.

Veeam ONE chargeback – Veeam ONE has always done a great job of reporting on resource consumption and capacity planning!  Now with 9.5 we will see chargeback functionality built into the product!  Chargeback will be available across all platforms Veeam ONE supports, bringing it to a Hyper-V, VMware, or vCloud Director environment near you!

v10 feature revealed – wait, what!?!  We haven't even seen v9.5 released yet!  Veeam seems to be starting to reveal more of a long-term strategy here!  Anyways, we have seen yet another storage integration announced by Veeam, this time in v10, and with IBM.  Tech previews of v10 will be available this coming May at VeeamON in New Orleans!

Release date – Perhaps the most important piece of information – VBR 9.5 will be here October 2016!!!  Be the first to know when it breaks into the market by signing up here.

Needless to say there were a lot of announcements today!  In the days to come I'm sure we will see more and more technical details around these products – how they work, how they will be priced and when they will come out – but for now if you want to see the announcements yourself I recommend taking a look at the Veeam blog!  Thanks for reading!

VMCE v9 Study Guide Module 3 – Veeam ONE Components, Prerequisites, and Deployment Scenarios

As we continue along the Veeam v9 VMCE Study Guide it's time to finish off Module 3 and have a look at Veeam ONE.  I don't have a lot of experience with Veeam ONE, so this will be a section I try to focus on throughout this guide!  Just an update – I've written and passed my VMCE at this point, so there's that!  Yay!  Either way I'm going to try to complete any unfinished portions in the interest of completeness!  So with that, let's get going… Veeam ONE relies heavily on a client-server architecture.  The architecture of Veeam ONE contains the following components.

Veeam ONE Server

  • The Veeam ONE Server is responsible for gathering all of the data from our virtual environment, vCloud Director and Veeam Backup & Replication servers.  It takes this data and stores it in its SQL database.  The Veeam ONE Server has a couple of subcomponents that are broken out as well:
    • Monitoring Server
      • Handles the collection of data to present to the Monitor client or web UI.
      • Pulls data from both VMware and Hyper-V as well as Veeam Backup & Replication.
    • Reporting Server
      • Provides a set of dashboards and predefined reports.
      • Verifies configuration issues
      • Tracks implemented changes in the environment
      • Helps adhere to best practices and optimize your environment
      • Provides capacity management

Veeam ONE Monitor Client

  • The Monitor client connects to the monitoring server and basically monitors your virtual environment.  This allows us to choose our connections to our virtual servers, our backup infrastructure, and manage alarms and data that is being monitored.

Veeam ONE Business View

  • Allows grouping of infrastructure objects into categories that better align to the business
  • Groupings/categories are applied to functionality within Monitor and Reporter
  • Can be synchronized with vSphere tags.

Interesting tidbits in regards to Veeam ONE

  • Can be licensed either per socket or per-VM being monitored

Deployment Models

Veeam ONE provides us with a couple different deployment models

Typical Deployment

Just as VBR gives us the opportunity to consolidate all of the components and services onto one server, Veeam ONE does as well.  The typical deployment takes the Veeam ONE Server, Web UI, and Monitor client and installs them all together on the same machine, be it physical or virtual.  The SQL instance can also be installed on this machine – by default, Veeam ONE ships with SQL Server 2012 Express.  This is a good way to manage a small environment, or to evaluate what Veeam ONE can do for you.  If you need to enable multi-user access to real-time performance data, it is possible to install the Veeam ONE Monitor client on separate machines.

A typical installation requires at least 4 cores, a 64-bit OS and 8 GB of RAM, although 16 GB is recommended.  It must be installed on Windows 7 SP1 or above, and supports SQL Server, both full and Express, from 2005 and up.

Advanced Deployment

The advanced deployment starts to break out some of the individual components to different servers.  The Veeam ONE Server and the Web UI components are installed on separate machines, and the Veeam ONE Monitor client can also be installed on multiple separate machines.  This deployment can still use the Express installation of SQL, however since you are most likely breaking out the components in order to decrease load, you will probably want to use a remote instance of SQL Server for this type of setup.

The Veeam ONE Server requires at least 4 cores, a 64-bit OS, and 8 GB of RAM, although 16 GB is recommended.  Again, Windows 7 SP1 or above and SQL Server 2005 and up.

The Web UI server requires a minimum of 2 cores, a 64-bit OS (Windows 7 SP1 and up), and at least 2 GB of RAM.

The Monitor client runs on either 32- or 64-bit OSs (Windows 7 SP1 and up) and requires only 1 socket, along with 1 GB of memory.

Interesting tidbits around Veeam ONE deployments

  • Supports vSphere 4.1 and above
  • Supports Hyper-V 2008 R2 sp1 and above
  • Supports vCloud Director 5.1 and above
  • Integrates with Veeam B&R 7.0 update 4 and above (standard and above)

VMCE v9 Study Guide Module 3 – VBR Prerequisites, Deployment Scenarios & Upgrades

As we continue on with Module 3 of the Veeam VMCE v9 Study Guide it's time to look at VBR prerequisites, the many deployment scenarios available for VBR and, finally, what options we have when upgrading Veeam Backup & Replication to version 9.  One of the benefits of deploying Veeam Backup & Replication is that you can make it as simple as you want, or as hard as you want 🙂  Veeam makes it very easy to deploy VBR and adapt to any size of environment.  To help break down the scenarios Veeam provides three different types of deployments for VBR: Simple, Advanced and Distributed.

Simple Deployment

Basically, in the simple deployment we are looking at having only one instance of VBR set up and installed on either a physical or virtual machine within our environment.  We have essentially one server, the Backup Server, which hosts all the roles and components we need to back up our environment.  The Backup Server at this point would host the following components:

  • Veeam Backup Server – for management
  • Backup Proxy – for moving data
  • Backup Repository – for hosting our backups.
  • Mount Server – for restoration
  • Guest Interaction Proxy

Interesting tidbits about Simple Deployment

  • All components are installed automatically
  • The default Backup Repository is determined by scanning the volumes of the machine on which we are installing.  The volume with the greatest free disk space is used, with a "Backup" folder created on it (a small sketch of this follows the list).
  • Only used if you are evaluating VBR, or have a small number of VMs you need to protect
  • Suggested to install on a VM (but not required) as it would give you the hot-add backup transfer option.
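Here's a tiny sketch of the default-repository behaviour described above – scan the local volumes and use the one with the most free space.  It assumes Windows drive letters and is only meant to illustrate the selection logic, not reproduce the installer.

    import os
    import shutil
    import string

    def pick_default_backup_folder():
        """Scan the machine's volumes and return a 'Backup' folder on the
        volume with the greatest free disk space (Windows drive letters
        assumed purely for illustration)."""
        best_drive, best_free = None, -1
        for letter in string.ascii_uppercase:
            drive = f"{letter}:\\"
            if os.path.exists(drive):
                free = shutil.disk_usage(drive).free
                if free > best_free:
                    best_drive, best_free = drive, free
        return os.path.join(best_drive, "Backup") if best_drive else None

    print(pick_default_backup_folder())  # e.g. D:\Backup if D: has the most free space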

Advanced Deployment

Advanced Deployment is the way to go if you have an environment of any size to back up.  In these cases we can’t put all the load on the Backup Server as it would be too much for it to handle.  In this deployment model we have the following components

  • Backup Server – Our control plane
  • Backup Proxies – Data mover components on separate servers to handle the transfer of data.
  • Backup repositories – Separate servers containing capacity to store our backup files, VM copies, and replica metadata
  • Dedicated Mount Servers – again, separate components in order to efficiently perform application and file level restore back to original production VMs
  • Dedicated Guest Interaction Proxies – separate components allowing us to efficiently deploy runtime processes in our Windows VMs.

Interesting tidbits about advanced deployments

  • Allows us to easily scale up and down by adding or removing components.
  • Backup traffic can be dynamically distributed amongst proxies.
  • A good setup to begin replicating data offsite by deploying proxies in both local and remote sites.
  • Provides HA to our backup jobs by having the ability to allow jobs to failover to other proxies if some become unavailable or overloaded

Distributed Deployment

The distributed deployment is used in cases where environments are spread out geographically, with multiple backup servers installed across many locations and the backup servers themselves federated using Enterprise Manager.  This way jobs can all be managed centrally, and we get an easy way to search for and find files across all sites.  This deployment model contains the following components:

  • Multiple Veeam Backup Servers for each site
  • Multiple Veeam proxies for each site
  • Multiple repositories located at each site
  • Multiple mount servers and guest interaction proxies at each site
  • Veeam Enterprise Manager Server
  • Optional Veeam Backup Search server to streamline search processes.

Interesting tidbits about the distributed model

  • With Enterprise Manager installed, we are able to provide flexible delegation operations to users within the environment to perform restores
  • Centralized license management
  • All the benefits of the advanced model

Upgrading Veeam Backup & Replication to v9

If you have ever had to upgrade an instance of Veeam Backup & Replication you should know that it is a pretty simple product to upgrade – with that said, you should always do your due diligence – backing up your SQL database and Veeam configuration is always a good idea – as well as ensuring you have completely been through all of the release notes.

There are a few limitations and concerns you might want to pay attention to when looking to upgrade to Veeam Backup & Replication v9

  • Supports a direct upgrade from version 7.0 Update 4 and 8.0
  • If you have any Windows 2003 servers acting as backup infrastructure components within your current configuration, they will need to be removed before the upgrade – they aren't supported and will cause the upgrade to fail.
  • The first time you connect to your newly upgraded backup server with a remote backup console, you will be prompted to apply the update to that console as well.
  • The Console cannot be downgraded
  • The first time you login after the upgrade Veeam will prompt you to update all of the other backup infrastructure in your environment such as proxies, repositories, etc.  These are upgraded in an automated deployment by the Veeam Backup Server.
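The constraints above lend themselves to a quick pre-upgrade sanity check.  The sketch below runs against a hypothetical inventory of components and is just a planning aid of my own – nothing like this ships with Veeam.

    SUPPORTED_UPGRADE_SOURCES = {"7.0 Update 4", "8.0"}

    def preflight_upgrade_check(current_version, components):
        """components is a list of (name, os_version) tuples describing the
        backup infrastructure.  Returns a list of blocking issues, if any."""
        issues = []
        if current_version not in SUPPORTED_UPGRADE_SOURCES:
            issues.append(f"direct upgrade from {current_version} is not supported")
        for name, os_version in components:
            if "2003" in os_version:
                issues.append(f"remove {name} (Windows Server 2003) before upgrading")
        return issues

    print(preflight_upgrade_check("8.0", [("proxy01", "Windows Server 2008 R2"),
                                          ("repo-old", "Windows Server 2003")]))
    # ['remove repo-old (Windows Server 2003) before upgrading']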

VMCE v9 Study Guide Module 3 – Remaining Veeam Backup & Replication Core Components

Aside from our proxies and repositories there are a number of remaining Veeam Backup & Replication core components to cover.  Today we will try to finish the component section of Module 3 of the Veeam VMCE v9 Study Guide.  Some of these components are required, whereas some are optional – but all are certainly fair game on the VMCE exam, so it's best to know them!

Guest Interaction Proxy

During a backup Veeam will interact with the guest to do several things – to do this it deploys a runtime process within each VM it is backing up (be it Windows or Linux) to perform the following:

  • Application Aware Processing
  • Guest File System indexing
  • Transaction Log processing

In older versions all of this was done by the backup server, causing higher resource usage on the backup server, or issues if the backup server and the processed VMs had degraded, slow or non-existent network connectivity.  As of v9, performing the above three actions and deploying these runtime processes can be handled by a Guest Interaction Proxy (Windows only; it will not work with Linux VMs).  Again, some interesting facts about the GIP:

  • Only utilized when processing Windows based VMs.  Linux VMs will still receive these packages from the Backup Server.
  • Only available in Enterprise and Enterprise Plus editions.
  • You can utilize multiple Guest Interaction Proxies to improve performance; it's recommended to have one at each site if you have a ROBO setup.
  • Can only be deployed on a Windows based server, be it physical or Virtual.
  • Must have either a LAN or VIX connection to the processed VM.
  • Can be installed on the same server as the proxy, repository, backup server, WAN Accelerator, etc.
  • Defined on the Guest Processing step of the backup/replication job.  We can assign each job manually to use a certain proxy or let Veeam decide.  If letting Veeam automatically determine which proxy to use, it will go in the following order (a small sketch of the selection logic follows this list):
    • A machine in the same network as the protected VM that isn’t the Backup Server
    • A machine in the same network as the protected VM that is the Backup Server
    • A machine in another network as the protected VM that isn’t a Backup Server
    • A machine in another network as the protected VM that is a Backup Server.
    • If at any point it finds more than one machine meeting the above criteria, it selects the one which is "less loaded" – the one with the least number of tasks already being performed.
    • If at any point a GIP fails, the job can fail over to the Backup Server and utilize it to perform GIP roles as it has done in previous versions.
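Here's a minimal sketch of that ordering, purely to illustrate the ranking described above.  The candidate dictionaries and field names are hypothetical; Veeam obviously doesn't expose its selection logic like this.

    def pick_guest_interaction_proxy(candidates, vm_network, backup_server_name):
        """Rank candidates: same network & not the backup server first, then the
        backup server in the same network, then other-network machines, breaking
        ties by current task count (the "less loaded" rule)."""
        def rank(proxy):
            same_net = proxy["network"] == vm_network
            is_backup_server = proxy["name"] == backup_server_name
            order = {(True, False): 0, (True, True): 1,
                     (False, False): 2, (False, True): 3}[(same_net, is_backup_server)]
            return (order, proxy["tasks"])
        return min(candidates, key=rank) if candidates else None

    candidates = [
        {"name": "gip01", "network": "10.0.1.0/24", "tasks": 4},
        {"name": "backupsrv", "network": "10.0.1.0/24", "tasks": 0},
        {"name": "gip02", "network": "10.0.2.0/24", "tasks": 1},
    ]
    print(pick_guest_interaction_proxy(candidates, "10.0.1.0/24", "backupsrv"))  # gip01 wins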

Mount Server

A mount server is required in order to restore VM guest OS files and application items back to their original locations.  Veeam mounts the content of the backup file to a staging server, and this server should be located in the same site as the backup repository where the files are stored – if it isn't, you may end up having restorations traverse the WAN twice.  To help prevent this, Veeam implements a mount server.

When a file or application item is restored to the original location, Veeam will mount the contents of the backup from the repository onto the mount server, and then copy the data from the mount server to the original location.

Interesting tidbits about mount servers…

  • Direct SQL and Oracle restores do not go through the mount server, they are mounted directly to the target VM.
  • A mount server is created for every backup repository and associated with it.  This is a Repository setting.
  • By default the mount server is created on:
    • Backup repositories – if they are Windows-based, the default mount server is the repository itself.
    • Backup Server – for any Linux-based or shared folder repositories, and for deduplicating storage appliances, the mount server is the backup server.
    • Veeam Backup & Replication Console – wherever the console is installed, so is a mount server, however it isn't automatically registered within B&R.
  • Scale-Out Backup Repositories require you to assign a mount server for each and every extent included.
  • Mount servers can only be Windows based, but can be physical or virtual.
  • In order to restore from storage snapshots the mount server must have access to the ESXi host which will host the temporary VM.

WAN Accelerators

WAN acceleration within Veeam works by using dedicated components to globally cache and deduplicate data between sites.  Basically, we need a WAN accelerator at both our source and target sites.  These sit in between the proxies, meaning data flows from the source backup proxy to the source WAN accelerator, then to the target WAN accelerator, then to the target backup proxy, and finally to either its replication target or backup repository.

Each accelerator will create a folder called VeeamWAN.  On the source, files and digests required for deduplication are stored here.  On the target, a global cache is stored.

WAN accelerators can require a lot of disk space to hold either the digests or the global cache, and therefore require some sizing exercises when creating them.  This certainly depends on the amount of source VM data you are backing up, but a rule of thumb is to provide 20GB of disk space for each 1TB of source VM disk capacity.  On the target we store the global cache, which is sized differently: the recommendation is to provide 10GB of space for each type of OS you are processing – by default, 100GB is allocated, i.e. 10 OSes.  Some situations may also require extra space, for example if digest data needs to be recalculated or the cache has been cleared; to accommodate this, it's recommended you also provide 20GB per 1TB of source VM data on the target WAN accelerator.
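Those rules of thumb are easy to turn into a quick sizing calculator.  The function below just applies the 20GB-per-TB and 10GB-per-OS numbers quoted above – they are planning estimates from the text, not an official formula.

    def wan_accelerator_sizing_gb(source_vm_tb, os_types, default_cache_gb=100):
        """20 GB of digest space per 1 TB of source VM data on the source,
        10 GB of global cache per OS type on the target (100 GB by default),
        plus another 20 GB per TB on the target for digest recalculation."""
        return {
            "source_digest_gb": 20 * source_vm_tb,
            "target_global_cache_gb": max(10 * os_types, default_cache_gb),
            "target_recalc_gb": 20 * source_vm_tb,
        }

    print(wan_accelerator_sizing_gb(source_vm_tb=5, os_types=4))
    # {'source_digest_gb': 100, 'target_global_cache_gb': 100, 'target_recalc_gb': 100}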

Interesting tidbits about WAN acceleration

  • Must be installed on a 64-bit Windows-based machine, physical or virtual
  • Can be intermingled with other proxies and repositories
  • For digest data on the source accelerator, provide 20GB of space for each 1 TB of data being backed up.
  • For global cache provide 10GB of space for each OS (Default is 100GB)

Veeam Backup Enterprise Manager

This component is optional and is really intended for those that have a distributed deployment containing multiple backup servers.  VEB essentially federates your backup servers and offers a single pane of glass view of them and their associated jobs.  From here you can do the following:

  • Control and Manage jobs
  • Edit and Clone Jobs
  • Monitor job state
  • Report on success/failure across VBR Servers
  • Search for guest OS files across VBR Servers and restore via one-click

Interesting tidbits around VEB

  • Can be installed on either a physical or virtual machine, so long as it's Windows

Veeam Backup Search

Veeam Backup Search is an option that can greatly help reduce load on the VEB server if you frequently need to search through a large number of backups.  Basically, Veeam Backup Search is deployed on a Windows machine running Microsoft Search Server; it runs the MOSS Integration service and updates the index databases of MSS – leaving VEB to simply pass along the search queries and have the results passed back.

Veeam Gateway Server

The Veeam gateway server is almost like a connector service, bridging the network between backup proxies and backup repositories.  The only time we need to deploy a gateway server is if we are using one of the following:

  • Shared Folder backup repositories
  • EMC DataDomain or HPE StoreOnce appliances

ExaGrid, another deduplicating appliance supported by Veeam, actually hosts the Veeam Data Mover service directly on the box; shared folder backup repositories and the Data Domain/StoreOnce appliances do not – thus, we use a gateway server to host and run the Veeam Data Mover service for them.  The gateway server is configured during the 'Add Backup Repository' wizard.  When prompted, we can select our gateway server manually, or choose to let Veeam decide the best fit.  If we let Veeam do the choosing, our gateway server is selected using the criteria below:

  • For a backup job, the role of the gateway server is assigned to the proxy that was first to process VM data for a backup job.
  • For Backup Copy jobs, the role of the gateway server is assigned to the mount server associated with the backup repository.  If for some reason the mount server is not available this will fail over to any WAN Accelerators that might be used for that job.
  • For Backup to Tape jobs the role of the gateway server is assigned to the Veeam Backup Server.

Veeam will select a different number of gateway servers per job depending on the multitasking settings of the repository – per-VM backup chains have multiple write streams by default, so each VM will be assigned a gateway server, whereas normal backup chains have only one gateway server assigned.
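The automatic selection rules above boil down to a simple dispatch on job type.  Here's an illustrative sketch – the server names are hypothetical and the logic simply mirrors the bullets above, not Veeam's actual code.

    def pick_gateway_server(job_type, first_proxy=None, mount_server=None,
                            wan_accelerators=None, backup_server=None):
        """Return the gateway server for a job per the rules described above."""
        if job_type == "backup":
            # the proxy that was first to process VM data for the job
            return first_proxy
        if job_type == "backup_copy":
            # the repository's mount server, failing over to a WAN accelerator
            return mount_server if mount_server else wan_accelerators[0]
        if job_type == "backup_to_tape":
            # tape jobs always use the Veeam Backup Server itself
            return backup_server
        raise ValueError(f"unknown job type: {job_type}")

    print(pick_gateway_server("backup", first_proxy="proxy02"))                    # proxy02
    print(pick_gateway_server("backup_copy", wan_accelerators=["wanacc01"]))       # wanacc01
    print(pick_gateway_server("backup_to_tape", backup_server="veeam-backup-01"))  # veeam-backup-01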

Tape Server

A tape server in Veeam Backup & Replication is responsible for hosting a tape device.  Simply put, it's a Windows machine that is connected to some sort of tape library.  The tape server takes on somewhat of a proxy role for tape, performing the reading from and writing to tapes.

Rubrik Firefly – Now with physical, edge, and moar cloud!

Rubrik, the Palo Alto based company that strives to simplify data protection within the enterprise, has recently announced a Series C worth a cool 61 million dollars, doubling their total capital to 112 million since founding just over a couple of years ago!  And as much as I love to hear about venture capital and money and whatnot, I'm much more into the tech, as I'm sure my readers are as well!  With that, alongside the Series C announcement comes a new release of their product, dubbed Rubrik Firefly!

Rubrik Firefly – A Cloud Data Management Platform

With this third major release from Rubrik comes a bit of a rebrand if you will – a cloud data management platform.  Nearly all organizations today have some sort of cloud play in their business; whether that be building out a private cloud to support legacy applications or consuming public cloud resources for cloud-native applications, they all have some kind of initiative that aligns with cloud.  The problem Rubrik sees is that the data management and data protection solutions running within those businesses simply don't scale to match what the cloud offers.  Simply put, customers need to be able to manage, secure, and protect their data no matter where it sits – onsite, offsite, or cloud, and no matter what stage of cloud adoption they are at – thus spawning the Cloud Data Management Platform.

[Image: Rubrik Firefly Cloud Data Management platform]

So what’s new?

Aside from a number of improvements and enhancements Rubrik Firefly brings a few big new features to the table; Physical Workloads, Edge Environments, and spanning across clouds.  Let’s take a look at each in turn…

Physical Workloads

I had a chance to see Rubrik way back at Virtualization Field Day 5, where we got a sneak peek at their roadmap – at the time they supported vSphere only and had no immediate plans for physical workloads.  The next time they showed up, at Tech Field Day 10, they actually had a bit of a tech preview of physical MSSQL support – and today that has become a reality.  As you can see they are moving very fast with the development of some of these features!  Rubrik Firefly adds official support for those physical SQL servers that you have in your environment – you know, the ones that take up so many resources that the DBAs just will not let you virtualize.  Rubrik can now back these up in an automated, forever-incremental fashion and give you the same ease of use, efficiency, and policy-based environment that you have with your virtual workload backups.  Firefly does this by deploying a lightweight Windows service, the Rubrik Connector Service, onto your SQL server, allowing you to perform point-in-time restores and log processing through the same UI you've come to know with Rubrik.  Aside from deploying the service everything else is exactly the same – we still have the SLA policy engine, SLA domains, etc.

And they don't stop at just SQL!  Rubrik Firefly offers the same type of support for those physical Linux workloads you have lying around.  Linux is connected into Rubrik through an RPM package, allowing for ease of deployment – from there Rubrik pulls in a list of files and directories on the machine and, again, provides the same policy-based approach as to what to back up, when to back it up, and where to store it!

Both the SQL MSI installer and the Linux RPM package are fingerprinted to the Rubrik cluster that creates them – allowing you to ensure you are only processing backups from the boxes you allow.

Edge Support

Although Rubrik is shipped as a physical appliance, we all know that this is a software-based world – and that doesn't change with Rubrik.  The real value in Rubrik is the way the software works!  Rubrik has taken their software and bundled it up into a virtual appliance aimed at remote/branch offices.  This allows enterprises with remote or branch offices to deploy a Rubrik instance at each location, all talking back to the mothership, if you will, at the main office.  The same policy-based approach can then be applied to those workloads running at the remote locations, allowing things such as replication back to the main office, archive to cloud, etc. to be performed at the edge of the business as well as at the main office.  The virtual appliance is bundled as an OVA and sold on a "# of VMs protected" basis – so if you have only a handful of VMs to protect you aren't paying through the nose to get that protection.

Cloud Spanning

Finally we come to cloud spanning.  Rubrik has always supported AWS as a target for archiving backups and brought us an easy-to-use, efficient way of getting just the pieces of data we need back from AWS – but we all know that Microsoft has been pushing Azure quite heavily as of late, handing out lots and lots of credits!  You can now take those spare credits and put them to good use, as Firefly brings in support for Azure blob storage!  The same searching and indexing technology that Rubrik has for Amazon can now be applied to Azure as well, giving customers options as to where they archive their data!

Bonus Feature – Erasure Coding

How about one more?  With the Firefly release Rubrik now utilizes erasure coding, bringing in a number of performance and capacity enhancements to their customers with a simple software upgrade!  Without putting hard numbers to it customers can expect to see a big increase in their free capacity once they perform the non-disruptive switch over to erasure coding!

Firefly seems like a great step towards the cloud data management platform – a topology agnostic approach to wrapping policy around your data, no matter where it is, ensuring it’s protected and secured!  The release of a Virtual Appliance perks my ears up as well – although it’s aimed directly at ROBO deployments now who knows where it might go in the future – perhaps we will see a software-only release of Rubrik someday?!?   If you are interested in learning more Rubrik has a ton of resources on their site – I encourage you to check them out for yourself.  Congratulations Rubrik on the Series C and the new release!

VMCE v9 Study Guide – Module 3 – Core Components – Backup Repository

Continuing along with the core components section of Module 3 we will now look at the backup repository – both the basic type as well as the new Scale-Out Backup Repository which was introduced in v9.

So what is a backup repository?

This is where our backup data resides.  It actually holds more than just VM backups – it keeps backup chains, VM copies, and metadata for our replicated VMs.  There are three types of backup repositories in Veeam.

1. Simple Backup Repository

Typically a simple backup repository is just a folder or directory located on the backup storage where we can store our jobs.  We can have multiple backup repositories and assign them to different jobs in order to limit the number of simultaneous jobs each one is processing, helping to spread the load.  A simple backup repository can be installed on:

  • Windows server with local or direct attached storage – storage can be a local disk, direct attached disk (USB drive) or an iSCSI/FC LUN mounted to the box.  Can be physical or virtual.  When a Windows-based repository is added the data mover service is installed and utilized to connect to whatever proxy is sending the backup data, helping to speed up the transfer and processing of data.  Windows repositories can also be configured to run vPower, giving them the ability to mount their backups directly to ESXi hosts over NFS.
  • Linux server with local, DAS, or mounted NFS – similar to that of Windows, we can use a Linux instance with directly attached storage, iSCSI/FC LUNs, or mounted NFS shares.  When a task addresses a Linux target, the data mover service is deployed and run, again establishing a connection to the source proxy.
  • CIFS or SMB share – an SMB share can be utilized to store your Veeam backups, however it doesn't have the ability to run the data mover service.  In this case, the gateway server (explained later) will be used to retrieve and write data to the SMB share.  This affects your deployment – you may want to deploy gateway servers offsite if writing to an SMB share at a remote location in order to help performance.
  • Deduplicating storage appliance – Veeam supports EMC Data Domain, ExaGrid and HPE StoreOnce as backup repositories as well.

Interesting tidbits around simple backup repositories

  • Data Domain does not necessarily improve performance, but it reduces load on the network
  • Data Domain does not support reverse incremental and cannot exceed 60 restore points in an incremental backup chain.
  • ExaGrid jobs actually achieve a lower deduplication ratio when using multi-task processing.  It's better to do a single task at a time.
  • When using StoreOnce, Veeam needs the Catalyst agent installed on the gateway server.
  • HPE StoreOnce always uses per-VM backup files
  • HPE StoreOnce does not support reverse incremental, nor does it support the defrag and compact full backup options.

2. Scale-Out Backup Repository

The scale-out backup repository essentially takes several simple repositories and groups them together into one large, pooled backup repository.  This way, as you approach your capacity within the SOBR, you can simply add another repository, or extent, to the pool, increasing your overall capacity.

When a simple backup repository is added as an extent to a SOBR, Veeam creates a definition.erm file.  This file contains all of the descriptive information about the SOBR and its respective extents.

One setting that must be configured on a SOBR is the backup file placement policy.  This basically determines how backup files will be distributed between extents.  There are two backup file placement policies available

  1. Data Locality
    • All backup files which belong to the same chain will be stored on the same extent.
    • A new full backup could reside on another extent, and the incrementals thereafter would also be placed on that new extent – whereas the old full and old incrementals would remain on the original extent.
  2. Performance
    • Full and incremental backups that belong to the same chain are stored on different extents.
    • Can improve performance of transform operations since the I/O load is spread across extents.
    • If an extent containing any part of a targeted backup chain is missing, Veeam will not be able to perform the backup.  That said, you can set the 'Perform full backup when required extent is offline' setting in order to have a full backup performed in the event Veeam can't piece the chain together, even if an incremental is scheduled.

All this said, the placement policy is not strict – Veeam will always try to complete a backup on another extent with enough free space if the preferred extent is not available, even if you have explicitly said to place full backups on a certain extent.

When selecting an extent to place a backup file on, Veeam goes through the following process (modelled in the sketch after this list).

  1. Looks at the availability of extents and their backup files.  If an extent containing part of the chain is not available, Veeam triggers a full backup to a different extent
  2. It then takes into consideration the backup file placement policy
  3. Then it looks at free space on the extents – the backup is placed on the extent with the most free space.
  4. Availability of the backup files from the chain – meaning an extent that has incremental backups from the current backup chain will have a higher priority than an extent that doesn't
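To make that ordering a little more concrete, here is a minimal Python sketch of the selection logic as described above – purely illustrative and my own simplification, not Veeam's actual implementation (the Extent class and the weighting are assumptions).

```python
# Rough illustrative model of the extent selection order described above.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    online: bool                 # step 1: availability
    free_space_gb: float         # step 3: free space
    has_chain: bool = False      # step 4: already holds files from this backup chain

def pick_extent(extents, matches_policy):
    """matches_policy(extent) -> True if the extent satisfies the placement policy (step 2)."""
    candidates = [e for e in extents if e.online]
    if not candidates:
        raise RuntimeError("no online extents - force a full elsewhere or fail the job")
    # Policy match first, then chain presence, then most free space
    return max(candidates, key=lambda e: (matches_policy(e), e.has_chain, e.free_space_gb))

# Example: data locality prefers the extent that already holds the chain
exts = [Extent("ext1", True, 400, has_chain=True), Extent("ext2", True, 900)]
print(pick_extent(exts, matches_policy=lambda e: e.has_chain).name)   # -> ext1
```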

During the start of a job, Veeam estimates how much space the backup file will require and compares that to what is available on the extents.  It does this in a couple of different ways depending on your backup file settings – a quick worked example follows the list below.

  • Per-VM Backup Chains – the full backup file size is estimated at 50% of the source VM size.  Incrementals are estimated at 10% of the source VM size.
  • Single File Backup Chain – the size of the full is estimated at 50% of the total size of the source VMs in the job.  The first incremental is estimated at 10% of that total source size – subsequent incrementals are estimated to be the same size as the incremental before them.
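As a quick worked example of that arithmetic – just the percentages above applied in plain Python, with a made-up 500 GB figure:

```python
# Worked example of the size-estimation rules above (all figures hypothetical).
def estimate_per_vm(source_size_gb):
    full = 0.5 * source_size_gb          # full is roughly 50% of the source VM size
    incremental = 0.1 * source_size_gb   # each incremental is roughly 10% of the source VM size
    return full, incremental

def estimate_single_file(total_source_gb, restore_points):
    full = 0.5 * total_source_gb         # full is roughly 50% of all source VMs in the job
    first_inc = 0.1 * total_source_gb    # first incremental is roughly 10% of the total source size
    # subsequent incrementals are assumed equal to the one before them
    return full, [first_inc] * max(restore_points - 1, 0)

print(estimate_per_vm(500))                        # (250.0, 50.0)
print(estimate_single_file(500, restore_points=4)) # (250.0, [50.0, 50.0, 50.0])
```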

Extents within a SOBR also have some service actions that can be performed as explained below

  • Maintenance Mode – This is mainly used if you need to perform some kind of maintenance on the server hosting the underlying extent, such as adding memory or replacing hardware.  When an extent is in maintenance mode you cannot perform any tasks targeted at the extent, nor can you restore any data that resides on the extent or from backup chains that have data on the extent.  When entering maintenance mode Veeam first checks to see if any jobs are currently using the extent.  If they aren't, it immediately goes into maintenance mode – if they are, the extent is placed into a Maintenance Pending state and waits for the tasks to complete; once done, it enters maintenance mode.
  • Backup Files Evacuation – This is used if you would like to remove an extent from a SOBR that contains backup files.  When doing this, Veeam moves the backup files on the extent to other extents that belong to the same SOBR.  Before evacuating, you must first place the extent into maintenance mode.  Veeam attempts to abide by its placement policies when deciding where to place the evacuated backup files.

Some interesting tidbits around SOBR

  • Extents can be mixed and matched, meaning we can have Windows repositories, Linux repositories and dedupe appliances all providing storage for one SOBR.
  • Used for Backup, Backup Copy, and VeeamZIP jobs only – note the difference – no configuration backups or replication metadata are stored on a SOBR.  If you try to add an extent to a SOBR while that repository is configured inside of any other jobs it will not add – you will first need to target those jobs at another repository.  Furthermore, once a backup repository is configured as a SOBR extent, you will not be able to use it for any other jobs.
  • Only available in Enterprise and Enterprise Plus, however Enterprise does have limitations: only one SOBR can be created, and it can only contain 3 extents.  If you downgrade licenses while you have a SOBR you will still be able to restore from it, but jobs targeted at it will no longer run.
  • When a backup repository is converted to an extent, the following settings are inherited by the extent
    • Number of Simultaneous tasks
    • Read and write data limit
    • Data compression settings
    • Block alignment
    • Limitations of the underlying repository – EMC Data Domain has a backup chain limit of 60 points, therefore if we use it as an extent in our SOBR, our SOBR will have the same chain limit.
    • Settings that are not inherited include any rotated drive settings as well as per-VM backup file settings.  Per-VM needs to be configured globally on the SOBR.

3. Rotated Drive Backup Repositories

Backup repositories can also use rotated drives.  Think storing backups on external USB drives that you regularly swap in and out to take offsite.  This is set up by using the 'This repository is backed by rotated drives' option on the backup repository.

A backup that targets rotated drives goes through the following process (sketched in code after the list).

  1. Veeam creates the backup chain on whatever drive is currently attached
  2. Upon a new session, Veeam checks whether the backup chain on the currently connected drive is consistent, meaning it has a full backup as well as subsequent incrementals to restore from.  If the drives have been swapped, or the full/incremental backups are missing from the drive, then Veeam will start a new chain, creating a new full backup on the drive which will then be used for subsequent incrementals.  If it is a backup copy job Veeam simply creates a new incremental and adds it to the chain.
  3. For any external drives attached to Windows servers, Veeam will process any restore points that fall outside the retention settings and remove them from the drive if need be.
  4. When any original drives get added back into the mix, Veeam repeats this process, creating full backups if need be.
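A minimal sketch of that decision logic – my own simplification purely to illustrate the flow above; drive contents and retention are stubbed out and this is not Veeam code:

```python
# Simplified model of the rotated-drive behaviour described above.
def process_rotated_drive(drive_files, job_type, retention_limit):
    """drive_files: ordered restore points found on the attached drive, e.g. ['full', 'inc']."""
    chain_is_consistent = bool(drive_files) and drive_files[0] == "full"

    if chain_is_consistent:
        drive_files.append("inc")            # continue the existing chain
    elif job_type == "backup_copy":
        drive_files.append("inc")            # backup copy jobs just add an incremental
    else:
        drive_files = ["full"]               # backup jobs start a brand new chain

    # On Windows-attached drives, points beyond retention are removed
    # (simplified: real retention keeps the chain restorable)
    while len(drive_files) > retention_limit:
        drive_files.pop(0)
    return drive_files

print(process_rotated_drive([], "backup", retention_limit=7))               # ['full']
print(process_rotated_drive(["full", "inc"], "backup", retention_limit=7))  # ['full', 'inc', 'inc']
```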

Interesting tidbits about repositories backed by rotated drives

  • Veeam can remember and keep track of drives on Windows Servers even if the drive letter changes.  It does this by storing a record about the drive within its configuration database.
    • When a drive is first inserted Veeam has no idea about it, so it must have the exact same letter that is associated in the path to folder setting on the repository.  After this, Veeam stores the information in regards to the drive in the database.
    • After reinserting a drive that is already in the configuration database, Veeam will still use this successfully, even if the drive letter doesn’t match that of the path to folder.
  • GFS Full Backups cannot be created with Backup Copy jobs on rotated drives
  • Per-VM backup files are not supported on rotated drives

VMCE v9 Study Guide – Module 3 – Core Components – Server, Console & Proxy

Veeam Backup & Replication is a very easy application to get up and running – but inside that underlying technology there are a lot of moving parts and components that make it easy.  Let's have a look at each one and explain what it does, as I'm sure you will see questions revolving around the functionality of these components on the exam.

The Backup Server

The backup server is where Veeam is actually installed.  You can think of the Backup Server as being the management plane if you will, coordinating all of the backup jobs, kicking off schedules and instructing other components what to do.  The backup server has a few responsibilities

  • Coordinates all tasks such as backup, replication, recovery verification and restore
  • Controls the scheduling of jobs as well as the allocation of the resources (other components) to those jobs.
  • Central management point for your Veeam environment and maintains global settings for the infrastructure
  • A default backup proxy and backup repository are automatically configured on the server designated as the Backup Server.  This allows small environments to get up and running very fast.

The Backup and Replication Console

The B&R console is the client piece of the client/server application that we use to actually manage our infrastructure.  In order to log into a B&R server with our console, the user needs to be a member of the local Administrators group on the B&R server.  From there, users can be further limited in what they can do using Veeam's roles.

Some interesting and testable tidbits around the console are

  • Multiple users can be logged into a B&R console making changes to the same jobs, however whoever saves their changes first gets priority – meaning other users will be prompted to reload their wizards to get the most recent changes after that user saves his/her changes.
  • If a session is lost due to network issues, the session is maintained for a maximum of 5 minutes.  If the connection is re-established within this time, users are good to go.
  • Cannot perform a restore from configuration backup when logged in remotely – must do this directly on the backup server itself.
  • When a console is installed a number of items are also installed by default during the setup process
    • PowerShell Snap-In
    • Explorers for Active Directory, Exchange, Oracle, SQL, and SharePoint
    • A Mount Server (explained later).

The Backup Proxy

The Backup Proxy is the heavy lifter within the Veeam environment.  This handles the movement of data between source and target, whether that be during a backup, a replication, a VM Migration job, or a restore operation – all the data moves through a Veeam Backup Proxy.  As I mentioned earlier a default proxy gets installed on our Backup Server during the initial install – and this may be fine and dandy for a small environment but as you find the need to increase performance, concurrency, and scale you will need to add more backup proxies to your environment.  Interesting tidbits around the backup proxy…

  • Deploys on a Windows machine, which can be physical or virtual, and that choice directly affects which backup transport modes are available (explained later).  Essentially, you can't do hot-add if your machine is physical, however you may want to leverage physical for something like Direct SAN.
  • Deployment is fully automated and handled by the Backup Server – you just point it towards a server in your infrastructure.

Depending on whether you are deploying Veeam with VMware or Hyper-V, a proxy will use a variety of methods to retrieve data, referred to by Veeam as Transport Modes in VMware and Backup Modes in Hyper-V.  These are defined directly on the proxy properties (a quick selection sketch follows the VMware mode descriptions below).

VMware Transport Modes

  • Direct SAN Access
    • This is the quickest processing mode and has the least impact on your production environment as it fully offloads the backup processing.
    • Supports block storage only (iSCSI/FC).  When using iSCSI both physical and virtual backup proxies can be deployed.
    • Direct SAN can be used for all operations involving the proxy, both backup and restore.
    • Requirements of Direct SAN Access are…
      • The backup proxy needs to have direct access to the production storage through either a hardware or software HBA.
      • LUNs must be exposed/zoned/presented to the backup proxy performing the Direct SAN Access.  Volumes should be visible in disk management, but not initialized.  Veeam automatically sets a SAN Policy within each proxy to Offline shared to help prevent initialization from occurring.
      • For restore operations the proxy will need to have write access to the LUNs hosting the disks.
    • The process of Direct SAN Access is as follows
      • Backup proxy sends a request to the host to locate the necessary VM on the datastore
      • ESXi host locates VM and retrieves metadata about the layout of the VMs disks on the storage.
      • The host then sends this metadata back to the backup proxy over the network
      • The backup proxy uses the metadata to copy the VMs data blocks directly from the SAN.
      • Proxy processes the data and finally sends it to the target.
  • Direct NFS Access (new in v9)
    • Recommended for VMs whose disks reside on NFS datastores.
    • Veeam will bypass the host and read/write directly from the NFS datastores
    • Data still traverses the LAN, however it doesn't add load to the ESXi host.
    • Direct NFS can be used for all operations involving a backup proxy, including backup and restore.
    • Some limitations to DirectNFS exist and are as follows
      • Cannot be used for VMs with a snapshot
      • Cannot be used in conjunction with the VMware tools quiescence option.
      • If the source VM contains disks that cannot be processed utilizing Direct NFS, those disks will be processed in Network mode.
    • The process of Direct NFS is as follows
      • Backup proxy sends a request to the host to locate the VM on the NFS datastore
      • Host locates the VM, retrieves metadata about the layout of the VM's disks on the datastore and sends it back to the backup proxy.
      • Backup proxy uses the metadata to copy VM data blocks directly from the NFS datastore – obviously over the LAN, it's NFS after all.
      • Backup proxy processes the data and sends it to the target.
    • Direct NFS Requirements
      • Backup proxy must have access to the NFS datastore
      • If the NFS server is mounted to ESXi hosts using names instead of IPs, those names need to be resolvable from the backup proxy
  • Virtual Appliance Mode (Hot-Add)
    • Easiest mode to set up and can provide a 100% virtual deployment.
    • Provides fast data transfers with any storage
    • Uses existing Windows VMs
    • Utilizes the SCSI/SATA hot-add feature from ESXi to basically attach the source and target disks to backup proxies, thus allowing the proxy to read/write directly from the VMs disk
    • Can be used for all proxy operations, including backup and restore.
    • The process is as follows
      • Backup Proxy sends a request to the host to locate the source VM on the datastore.
      • Host locates VM and reports back
      • Backup Server triggers vSphere to create a VM snapshot of the processed VM and hot-add or directly attach source VM disks to the backup proxy.
      • Proxy reads data directly from the attached disks, processes it and sends it to the target
      • Upon completion, Backup server sends commands to remove disks from the backup proxy and delete any outstanding snapshots from the source VM.
    • Requirements for Virtual Appliance Mode are…
      • Backup Proxy must be a VM
      • ESXi host running the proxy must have access to the datastore hosting the disks of the source VMs
      • Backup Server and Proxy must have latest version of VMware Tools installed.
  • Network Mode
    • Network mode essentially uses the LAN to transfer your backups, making it one of the least desirable transport modes, especially when dealing with 1Gb links.
    • Supports any type of storage and is very easy to set up.
    • Leverages the ESXi management interface, which can be terribly slow, especially on older versions of vSphere.
    • The process of network mode is as follows…
      • Backup Proxy sends the request to the ESXi host to locate the VM on the datastore.
      • Host locates VM.
      • Data is copied from the production storage and sent to the backup proxy over the LAN using Network Block Device protocol (NBD).
      • Proxy processes the data and finally sends it to the target.
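Pulling the four VMware modes together, here's a rough way to think about which one a proxy is likely to end up with – an illustrative sketch of the decision factors described above, not how the product actually negotiates transport modes:

```python
# Illustrative decision sketch only - mirrors the mode descriptions above.
def likely_transport_mode(proxy_is_vm, datastore_type, san_luns_visible_to_proxy,
                          vm_has_snapshot=False):
    if datastore_type == "nfs" and not vm_has_snapshot:
        return "Direct NFS"                   # bypasses the host, traffic over the LAN
    if datastore_type == "block" and san_luns_visible_to_proxy:
        return "Direct SAN"                   # proxy reads blocks straight from the SAN
    if proxy_is_vm:
        return "Virtual Appliance (hot-add)"  # source disks hot-added to the proxy VM
    return "Network (NBD)"                    # fallback over the ESXi management interface

print(likely_transport_mode(proxy_is_vm=False, datastore_type="block",
                            san_luns_visible_to_proxy=True))    # Direct SAN
print(likely_transport_mode(proxy_is_vm=True, datastore_type="block",
                            san_luns_visible_to_proxy=False))   # Virtual Appliance (hot-add)
```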

Hyper-V Backup Modes

If we are backing up a Hyper-V environment with VBR then our backup proxies are set up a little differently than in VMware.  Basically we have a couple of different Backup Modes within VBR's support for Hyper-V

  • On-Host Backup Mode
    • Easy to use, supported out of the box.
    • Good for a small infrastructure
    • May impact production host CPU usage as well as provide a bit of overhead network wise.
  • Off-Host Backup Mode
    • Very fast
    • Has no impact on production CPU or network usage.
    • Requires an extra physical machine.
    • If backing up a Hyper-V cluster with CSV, off host proxy must NOT be a part of the Hyper-V cluster as CSV does not support duplicate LUN signatures
    • Requirements of an Off-Host Backup Proxy are
      • Must be a physical Windows 2008 R2 or higher server with the Hyper-V role enabled.
      • Must have access to the shared storage where the VMs are hosted
      • A VSS Hardware provider supporting transportable shadow copies must be installed on both the proxy and the Hyper-V host running the source VM.  This is distributed by storage vendors with their client component packages.

Testable tidbits about Backup Proxies

  • In terms of sizing, you should allocate 1 CPU core for each task you'd like the proxy to process (see the quick sizing sketch after this list)
  • If backing up a Hyper-V cluster utilizing CSV, ensure proxy is not part of the cluster.
  • Off host backup proxies are limited to ONLY PHYSICAL MACHINES
  • Direct SAN Limitations
    • No VSAN support
    • No VVOL support
    • In the case of replication, it’s only used ON THE TARGET SIDE during the first full replication of the VM, subsequent jobs will use hot-add or network.  Source can use Direct SAN for every run of the job.
    • Can only restore thick VM disks
  • Direct NFS will not work for VMs containing snapshots, thus, it can only be used on the target side for the first run of a replication job.
  • Direct NFS will not work with VMware Tools Quiescence.
  • Virtual Appliance Mode Limitations
    • IDE disks are not supported.
    • SATA disks are only supported on vSphere 6.0 or newer.
    • On vSphere 5.1 or earlier, VM disk size cannot exceed 1.98 TB
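On the sizing guideline above (1 CPU core per concurrent task), a trivial back-of-the-napkin calculation – the numbers here are hypothetical:

```python
# Back-of-the-napkin proxy sizing based on the 1-core-per-task guideline above.
import math

def proxies_needed(concurrent_tasks, cores_per_proxy):
    return math.ceil(concurrent_tasks / cores_per_proxy)

# e.g. 24 tasks (roughly, VM disks) to process concurrently, with 8-core proxies
print(proxies_needed(concurrent_tasks=24, cores_per_proxy=8))  # 3
```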

Let’s leave this post here for now – we will learn more about proxies and how they are configured in a future module, but the next post will continue on with the VBR core components and talk about Backup Repositories.

VMCE v9 Study Guide – Module 1 & 2 – Overview of products & Key Concepts

As I prepare to write my VMCE on version 9 I figured I would try and document some of the material I've been studying here – this is the same approach I took when I wrote my VCP and VCAP exams and it seemed to work, I passed them – and hey, if I can help provide a little material for someone else to look over when they are studying for their certification then that's ok too!

One thing, I can’t vouch for all of this being 100% accurate – I have a lot to cover in a very short time period so I’m scrambling to get through all of the modules as fast as I can – so if you see errors, please let me know and we can get them fixed up…  Also you may find that I have certain sections of information in more than one spot, or information that doesn’t quite fit into the module I’m not – just ignore it – I’ll try and clean everything up as best I can after I’ve completed the certification.

Another thing, Veeam doesn’t necessarily have a blueprint for the VMCE – there is a lot of great information here, but nowhere is there an official blueprint!  So I’m going to follow along with the course description of the VMCE v9 class and see if that works.  Honestly, blueprints are nice to help organize things, but for the most part, you just need to know everything!

So lets start…

Module 1 – Introduction

Hi, I’m Mike from Canada eh!

And on to Module 2.

Module 2 basically just covers a brief overview of the products Veeam has to offer, as well as provides definitions for some key industry concepts that Veeam adheres to.  How much of this will be on the exam – no idea, but it isn't that hard of material.  Still, in the interest of making this study guide complete, let's go over it anyways.

Overview of Products

Veeam has quite a few products in its portfolio and seems to be constantly adding more.  Below is a brief overview of what Veeam has to offer on the market today

  • Veeam Backup & Replication
    • Agentless backup supporting both VMware and Hyper-V
    • Provided in 4 different editions; Free, Standard, Enterprise, & Enterprise Plus
  • Veeam ONE
    • Real-time monitoring, reporting, and capacity planning.
    • Reports on both VMware and Hyper-V environments as well as monitors and reports on your Veeam Backup & Replication environment.
    • Provided in 2 different editions; Free and Paid
  • Veeam Availability Suite
    • A bundled piece of software that includes both Veeam Backup & Replication as well as Veeam ONE.
    • Provided in three different editions; Standard, Enterprise, Enterprise Plus.
    • Each edition includes the paid version of Veeam ONE and the corresponding edition of Veeam Backup & Replication.
  • Veeam Backup Essentials
    • Delivers the same functionality as Veeam Availability Suite, but targeted at SMBs
    • Can be licensed to a maximum of 6 sockets, in two socket increments.
    • Provided in 3 different editions; Standard, Enterprise, & Enterprise Plus.
  • Veeam Management Pack for System Center
    • Provides visibility and protection into both Hyper-V and VMware VMs
    • Extension of Microsoft Systems Center.
    • Ability to monitor VMs, hosts, hardware, storage, network resources as well as your Veeam Backup & Replication environment.
    • Provided in 3 different editions; Free, Enterprise & Enterprise Plus
  • vSphere Web Client Plug-in
    • Provides an overview of your backup infrastructure status/trends and the ability to identify unprotected VMs directly from within the vSphere Web Client.
    • Create ad-hoc VM restore points without leaving the vSphere Web Client.
  • Veeam Backup & Replication Add-on for Kaseya/Plug-in for LabTech
    • Allows you to analyze protected and unprotected VMs from within the product's respective web interface/control center.
    • Manage, View, and monitor your Veeam Backup jobs
    • Generate reports based on Veeam Backup & Replication data.
  • Veeam Endpoint Backup FREE
    • Allows you to backup and restore Windows-based endpoints (desktops, laptops, and servers).
    • Integrates with Veeam Backup & Replication allowing you to utilize VBR Repositories to store your data.

Key Concepts

Within the Veeam Availability Suite and other Veeam products there are several key concepts that are discussed.  Veeam has come up with 5 key concepts that provide an overview of all of their products, discussed below

  • High-Speed Recovery – very fast recovery of your applications, VMs, and files.  Backed by several Veeam technologies such as Instant VM Recovery, Instant File-Level Recovery and the Veeam Explorers (Exchange, Active Directory, SharePoint, Oracle, SQL, and Storage Snapshots)
  • Data Loss Avoidance – Near-continuous protection and streamlined recovery processes in the event of a disaster.  Backed by products and technology such as Veeam Cloud Connect, native tape support, Quick Backup, Scale-Out Backup Repositories, Guest Interaction Proxies, deduplicating storage integrations, backup from storage snapshots and built-in WAN acceleration.
  • Verified Recoverability – Guaranteed recovery of your data.  Backed by technologies like SureBackup and SureReplica
  • Leveraged Data – Provide a production like environment to leverage backup data to its fullest potential.  Backed by technologies such as Virtual Lab.
  • Complete Visibility – Proactive monitoring for your virtualized and backup environment.  Backed by technology such as Veeam ONE, Standalone Console, Veeam Enterprise Manager and vCloud Director support.

Aside from the Veeam-created concepts there are a couple of other important terms, and it's best to know how Veeam defines them.

  • Recovery Time Objective (RTO) – The amount of time within which a system must be recovered after an outage.
  • Recovery Point Objective (RPO) – The point in time to which a system's data must be recovered after an outage.
  • Recovery Time and Point Objective (RTPO) – Veeam's combined term for providing both a low RTO and a low RPO at the same time.

That’s it for Module 2!  Stay tuned for Module 3 where we will begin to discuss some of the core components of Veeam Backup & Replication as well go through the deployment scenarios and upgrades!

Nakivo – Backup and Replication for your VMs – A review!

nakivoLet’s face it – backup software is not the most exciting thing for a CIO in today’s world.  I mean, 99% of the time it sits idle, backing things up, spewing out reports – for the most part its somewhat of money sinkhole in an environment – but when push comes to shove and someone has deleted that important email, or that mission critical server fails – when a recovery or restore option takes place a piece of backup software can make or break a business!   Whether you are a simple SMB or a large enterprise backup could almost be classified as one of the most important things to your organization – so it has to be easy, intuitive, and reliable!   Nakivo, with their flagship Backup & Replication has taken that exact approach when developing their software!  Nakivo, headquartered in the infamous Silicon Valley was founded just in 2012 and after 4 fast-moving years have just released version 6.1 of their product.  This is one piece of software I have been hearing a lot about, but never had the chance to check out.  With that said I grabbed an NFR key from them and put it in the lab – and here are my thoughts.

Disclaimer: This review is sponsored, meaning I did receive compensation in some sort of form for writing this! That said, as always, any review I post on my site is solely my words and my opinion and in no way was modified or changed by the vendor!

 

Architecture

Before we dive directly into the installation it's best to first explain a little about Nakivo's architecture.  Nakivo is really broken down into three main components: a Director, a Transporter, and a Backup Repository.


The Director

We can think of the Director as somewhat of a management plane for Nakivo – providing the user interface we log into and maintaining lists of our virtual infrastructure.  It also handles the creation, configuration, and scheduling of our backup job environment.  We only need one instance of the Director as it can handle multiple vCenters and standalone ESXi hosts.

The Transporter

The next component, the Transporter, is our heavy lifter.  The Transporter is the data mover, per se, performing all of the backup, replication and recovery operations as it receives its instructions from the Director.  The Transporter also handles features such as compression, encryption and deduplication.  When we install a Director we automatically get one "Onboard Transporter" installed on the same machine by default, which cannot be removed.  That said, as we find ourselves processing more VMs and jobs simultaneously, we can scale our backup environment by adding additional standalone Transporters to help with the lifting!  As we do so, we also get network acceleration and encryption between Transporters as data is passed back and forth.  Finally we have the Backup Repository.

The Backup Repository

This one is pretty self-explanatory in the backup world.  It's a container or pool of storage to hold our backups.  This can be a CIFS share or simply any local folder or storage attached to a Transporter.  Again, when we initially install our Director we also get an "Onboard Backup Repository" to use by default.

Installation

Alright, with a little background knowledge behind us it's time to get Nakivo deployed and wow, talk about some options!!!!  Deploying Nakivo Backup & Replication should satisfy just about every environment out there!  If you are primarily a Windows shop, simply use the Windows installer – does your environment mainly consist of Linux-based distributions – hey, simply install the Linux package!  Or, do you prefer the ease of simply deploying appliances – they have you covered there as well with OVA-based virtual appliances!    Keep in mind that it doesn't matter which installation method you choose – in the end you are left with the same product.  For the sake of this review I've chosen what I think might be the most common installation method – the Windows-based install.

So on with the install!  I've chosen the "Full solution" option as my installation type – meaning I will get an all-in-one install of a Director, Transporter and Backup Repository on the same machine!  Certainly this might not be ideal for a production environment, but it suffices in the case of my lab.  As you can also see, the first screen allows me to specify where exactly I'd like to create the repository as well.

One click later…

image

Wait what!?!?!  Yeah – one click!  One click and we are done with the Windows installation of Nakivo Backup & Replication! As for the other installation types they are just as easy – Linux requires the execution of a single command, and we all know how simple deploying a virtual appliance is!  If you are looking to protect an Amazon instance, a simple link to a deployable AMI is provided as well!

Configuration

Time to start configuring the product now!  Just a note, I really dig the earth/space image that is displayed by default in the UI.  It's kind of a nice break from the standard box-type login screens you see in most products.

image

Upon first launching Nakivo you will be prompted to set up a username and password.   After doing so you will be brought into their configuration wizard and, as you can see below, they only require three types of information: Inventory, Transporters and Repositories.  This wizard, along with many others within Nakivo, is short and to the point – and clearly makes sense in the simplest terms – think: what to back up, how to move it, and where to put it – easy right?

image

As far as Inventory and VMware goes we just need to point Nakivo to our vCenter Server and provide it some proper credentials – from there the product goes out and discovers our inventory and allows us to add it into Nakivo Backup & Replication.

image

The Transporter section allows us to add/import any existing Transporters we may have already installed in our environment – be they on vSphere or Amazon AWS if we choose to do so.  As we mentioned earlier, this review will simply use the "Onboard transporter" that is installed by default.

image

Lastly we can set up any Backup Repositories we want to have within our backup environment – again, I'm sticking with the default "Onboard repository" we set up during the installation, but if need be we can create new or import existing repositories into Nakivo during this step.

Once we are done we are brought into the Nakivo management UI where we can begin creating jobs and backing up our environment – but before we go too far there are some other configurable options we can change that weren't included in the initial bare-bones wizard.

image

I’m not going to go through all of the configurable options but I’ll highlight a few common settings normally setup within environments as well as some very “nice to have’s” that Nakivo includes…

  • General->Email Settings – here we set up our SMTP options in order to have Nakivo send out alerts and reports.
  • General->Branding Settings – as mentioned earlier we have complete control over modifying the look and feel of Nakivo, uploading our own logos and backgrounds as well as support and contact information
  • General->System Settings – This allows us to specify how long we store job history and system events, as well as setup any regional options we prefer such as week start days, etc.
  • Inventory – Here we can add multiple vCenter/ESXi hosts as well as AWS environments
  • Transporters/Repositories – Again, this is where we can manage or add any new Transporters or repositories to the system.
  • Licensing – Handles the changing of licenses for the product.

 

So on to the job setup

Now that we have Nakivo configured it's time to start creating some jobs and see just how the product performs.  From the main dashboard we can do this by simply clicking the Create button.  As you can see to the left we have a variety of different jobs we can create, and depending on what you have set up within your inventory some may be unavailable to us.  For instance, I don't have an Amazon account attached to my instance of Nakivo so I'm unable to create a job to back up or replicate EC2 VMs.  That said, we did add our vCenter into our Inventory so let's go ahead and select 'VMware vSphere backup job' to get started…

image

As you can see above, the vSphere backup job creation is again in a wizard-type format, firstly requiring us to select just which VMs we would like to process with this job.  We do this by either browsing through the inventory presented, or filtering with the search box provided, then checking the box next to the VMs we'd like to back up.  We can also select parent objects here as well, such as a host, cluster, or vCenter, which would in turn back up all VMs residing within the parent.  This is useful in the event you want to capture any newly created VMs in the environment without having to modify existing jobs every time.  If selecting multiple VMs during this stage you can drag them around within the right-hand pane in order to set priority preferences for processing – ensuring certain VMs are backed up before others.  For now I've selected just my Scoreboard VM.

image

The second step deals with repository selection – we've already selected what we want to back up, now it's time to say where to back it up to.  Selecting 'Advanced' and expanding out our VMs we can see that we can globally select a repository for the job, yet perform overrides as well on a per-VM, per-disk basis – giving us the granularity to place certain VM disks on certain repositories if we choose to do so.

image

Thirdly we set up the job schedule, with shortcuts for all days, workdays, weekends etc., which can change depending on the regional settings we have configured within the system.

image

Lastly we set up our job options.  It is here where we give the job a name, select our retention cycles for the job and specify any pre/post job scripts we might want to kick off – all of the standard features you expect from a backup solution – but there are some additional options available here as well that we should have a look at…

  • App-aware mode – instructs VMware Tools to quiesce the VM before backing up, allowing applications to ensure they are in a consistent state.
  • Change Tracking – This is a common feature provided by VMware that allows backup applications to process just those blocks that have changed since previous backups, speeding up the time it takes to create an incremental backup.  Here we can select to use either the VMware version (preferred) or Nakivo's proprietary version (available if no other CBT exists).
  • Network Acceleration – if backing up over a WAN or slow LAN links this option will leverage compression and other reduction techniques to speed up data transfer
  • Encryption – this option will encrypt data that flows between Transporters.  Since we have only one Transporter, this option is not available to us.
  • Screenshot Verification – This option will use a Nakivo technology called Flash VM Boot (we will cover this later) that will automatically recover our backups in an isolated manner and take a screenshot of the VM for inclusion in the job reports and notifications.
  • Recovery Points – here we can specify how many daily, weekly, monthly, and yearly recovery points we would like to maintain.
  • Data Transfer – Allows us to specify how Nakivo gets to the source data (Hot Add – mounts VM disks to the Transporters, SAN – retrieves data directly from an FC or iSCSI SAN LUN, or LAN – network access to the data).  We can also specify which Transporters we would like to use for the job here if we had multiple Transporters on different networks, clusters, etc.

 

image

After clicking ‘Finish’ we can now see that our ‘Run Job’ tab in the dashboard is active and displays our newly created job.    As we can see above our new job is indeed running, with the status being updated in the Job Info section of the dashboard.  I really like the way Nakivo has displayed this data.  We can see everything we need to know about any given job, as well as it’s run status, resource usage on any transporters its utilizing and the events and job status on all on dashboard.  When the initialization of the job is complete, the UI switches to different view showing the speed and data transferred – A very intuitive design for a UI.  The only thing I’d love to see here is the ability to break this information out into another window without having to open a new tab.

But it’s Nakivo Backup AND REPLICATION

Now that we have successfully backed up our Scoreboard VM it's time to have a look at replication.  The process for creating a replication job is similar to that of a backup – simply click 'Create' and select 'VMware vSphere Replication Job'.  Again, we are presented with a similar 4-step wizard.  In step 1 we select which VMs we wish to replicate, again with the option of selecting parent containers.

image

Step 2, as shown above, presents us with some different options than that of a backup.  Since we are replicating VMs, they will be stored in their native VMware format, therefore instead of selecting a repository as a target we need to select another ESXi host.  As you can see above I've simply selected to replicate my ConcessionStandPOS VM from its location in Montreal to another ESXi host located in Brossard for DR purposes.  Again, step 3 allows us to create a schedule for the replication to occur, with the exact same options as those of a backup job.

image

Step 4, shown above, is similar to that of the backup job options with a few additions.  We still have the ability to select our Transporters and transport mode, as well as set recovery point retention settings and perform the screenshot verification, however we have a few new options to configure, outlined below

  • Replica Names – append/prepend a string to the VM name for the replica, or specify individual names on a per-VM basis
  • Replica Disks – allows us to maintain the source disk type for the replica, or specify that replicas are only stored thin-provisioned.

Once we click Finish we will again see our newly created job on the dashboard.  One item of interest here is that by default Nakivo doesn't group our jobs, meaning backup and replication jobs are intermixed.  They are distinguishable by the small icon next to them, but if you want to further distinguish between the two visually you can click 'Create' and then 'Job Group'.  This essentially creates a folder that we can drag and drop our jobs in and out of, allowing us to create a Backup Job Group and a Replication Job Group.  Job Groups also allow us to perform bulk operations on all jobs within that group, such as starting, stopping, disabling and enabling, etc…

When it really matters…

We can do all of the backing up and replicating we want, but when push comes to shove we all know that it's the recovery that matters most!  All recovery within Nakivo is done from the 'Recover' menu in the main dashboard.  As you can see to the left we have a variety of options when it comes to recovery in Nakivo, with each explained below…

Individual Files

This allows us to recover individual files from a VM backup within Nakivo.  After selecting our backup and then a desired restore point, or point in time to restore to, Nakivo will mount the deduplicated, compressed backup file to its Director interface.  In the end we are presented with a file browse dialog box, allowing us to select individual files, folders, partitions, and drives.   From there we have the option of either downloading these files directly to our Nakivo server, or whatever client we happen to be running the Nakivo UI on, or forwarding them via email.

Microsoft Active Directory Objects

Active Directory objects are treated somewhat the same as a file-level recovery.  The backups are mounted in their compressed and deduplicated state to the Nakivo server.  From there you can browse or search for individual objects and recover them directly to your client machine.  The AD objects are downloaded in LDIF format, which allows for easy importing directly back into Active Directory.

Microsoft Exchange Objects

Similar to Active Directory objects, Nakivo can restore Microsoft Exchange items as well.  With this, we have the ability to search for and recover items such as emails, folders, mailboxes, etc.  The items are downloaded to the client machine, or alternatively forwarded via email to an address of your choosing.

VMs from backup

If you need to restore an entire VM this is the option you would most likely choose.  Nakivo allows you to restore a complete VM from a backup file – at which point it extracts the data from the deduplicated, compressed backup file and re-registers the VM on a host of your choosing, either preserving the VM's UUID or creating a new one.  Just as in replication, we are able to restore the VM with its original disk type, or force it to be thin-provisioned.  We can also specify whether we would like our recovered VMs powered on, and whether or not we would like to change or preserve the MAC address on the recovered VM.

VMs from replica

Failing over to a replica within Nakivo is a very easy process.  Essentially you simply select which VM you would like to fail over, select the point in time you want to fail over to and run the job – after that, Nakivo simply reverts the replica to the correct point-in-time snapshot and powers it on.  When completed you are left with an exact copy of your VM, recovered almost immediately.

Flash VM Boot

Flash VM Boot is a technology that allows us to power on our VM backups directly from their compressed and deduplicated state.  Rather than taking the time to restore the data as we did in the 'VMs from backup' scenario, we can simply boot a VM directly from its backup files.  Nakivo does this by first creating a new VM on a target ESXi host, then exposing the VM's disks within the backup as iSCSI targets and mounting them directly to the newly created VM as virtual RDMs.  Before any mounting, though, a snapshot is created, which will redirect any changes that may take place during the Flash VM Boot, providing a means of discarding them later in order to preserve the integrity of the backups.  This is the technology that enables the 'Screenshot verification' option within the backup jobs, allowing us to ensure that our backups will indeed boot up when it really matters.   Once the VMs have booted you can permanently recover them by utilizing VMware Storage vMotion to migrate the RDMs to VMDKs, or, if you aren't licensed for Storage vMotion, you can create a new replication job within Nakivo to replicate the VM to another host.

So what's the verdict?

Nakivo is certainly a very easy product to use and get used to – having the management interface run through a web browser is certainly an advantage, being able to launch the management interface from any workstation without installing a client!  Also, the UI is very intuitive and very clean, which is surprising because they cram a lot of information into those screens – but everything is super easy to find.  Creating backup and replication jobs is a breeze, simply launching 4-step wizards from start to finish!  As for performance I can't complain either – all of my jobs finished in a timely manner; mind you, my test VMs are quite small with very little change rate, but needless to say performance was fine.  Nakivo is architected in a way that is simple to get up and running very quickly, yet also simple to scale with a growing environment by adding more Transporters and repositories.  I really like the options you have when deploying Nakivo – be it physical, virtual or cloud, Windows, Linux, virtual appliance or even on a NAS such as Synology – Nakivo leaves the choice to you!   The deduplication technology is outstanding – and coupled with the compression they offer you can be sure that you are using as little capacity as needed and not storing redundant data or wasting space.

I would however like to see the product expanded in the future to include a couple of features that I couldn't find.  Firstly, it would be nice to see Nakivo bake in the ability to restore individual files and application items directly back into their source VMs without having to download them locally.  As well, even though I don't use it, Hyper-V support seems to always come last on backup vendors' lists – hopefully we see this supported sometime soon too.  I should mention that even though this review focused solely on VMware, Nakivo is fully supported to protect your instances in Amazon as well – giving you feature-rich backup and replication options to move data between regions without utilizing snapshots.  Also, there are a slew of multi-tenancy options that I didn't have time to explore, as well as the ability to perform copies of your backups offsite or to the cloud.   As far as licensing goes, Nakivo is licensed on a per-socket basis, and honestly, starting at $199/socket for VMware and $49/month for AWS you are going to be hard-pressed to find a product with all of these features at a lower price point!

With all this said, would I recommend Nakivo?  Certainly!   It's easy, intuitive, it performs and it's priced right!  But as always, don't necessarily take my word for it!  If you want to try out Nakivo for yourself you can – if you are a VMUG member, vExpert, VCP, VSP, VTSP, or VCI you can get your hands on a free, full-featured two-socket NFR key yourself!  Nakivo also offers a full-featured trial edition for 14 days to try the product out!  Still not enough for you?  Nakivo has a free edition – you can back up 2 VMs, with all of the features above, for free, forever!  Again – options!!  And no excuse not to try it out!


Want to learn more about Nakivo?

Check out some of these great resources!

As well as some other great community reviews of Nakivo