Tag Archives: Veeam

Veeam’s willingness and responsiveness to change may be its biggest asset.

Let’s set the stage for this article, shall we?

At VMworld 2017 I had the chance to attend Veeam’s Press Breakfast.  Now, I’ve been to press events before and very seldom have I ever written an article about them – I mean, they aren’t a technical deep dive into any technology – sometimes there is embargoed pre-release information given out, but really the goal is to lay out the company’s broader vision and scope for the press and analysts in attendance.

So why an article this time?

It’s not because the eggs were delicious or the coffee was strong – and it’s not due to any peeks into the future or pre-release information, as there was none handed out.  Honestly, the very first slide of co-CEO Peter McKay’s presentation is what stood out for me.  That slide contained the exact same image as shown below.

The gist – it’s not the strongest that survives, nor the smartest, but the one that can best adapt to the change around them.  This got the wheels in my head turning a bit – how could this quote, a quote attributed to a man writing over 150 years ago, possibly apply to a tech company in today’s world?  Which led me to a final thought…

How has Veeam changed over the years?

I don’t recall the exact day I started using Veeam Backup & Replication.  It was sometime around the v3 days – but what I can remember is some of their messaging over the years – and how that has changed…

#1 for VMware

This one phrase resonated with me – at the time anyway!  Most organizations in the 2008–2010 era were heavily focused on deploying VMware environments and virtualizing their workloads.  We quickly realized that we needed some way to protect these VMs – something different from our traditional backup methods – which brings us to the next era of Veeam messaging.

Purpose Built for Virtualization

We saw an influx of Hyper-V environments being built as the industry finally got to a point where Hyper-V was “good enough”.  Therefore, we saw Veeam add support for the new hypervisor and change their messaging to Purpose Built!  We needed something different from our legacy backup software to work within our virtualization environments – and Veeam was delivering on this with software designed around virtualization.

Availability for the Modern Datacenter

With the influx of cloud services, we once again saw Veeam adapt – adding the ability to offload backup copies to the cloud, as well as to their own service providers via Cloud Connect.  We saw the “purpose-built for virtualization” messaging dwindle away as Veeam began to release products which handled physical endpoint backups for both Windows and Linux.  Veeam was turning more and more into an availability company – moving beyond the data protection business.

Availability for the Always-On Enterprise

And here we are today – Veeam has changed in a lot of ways and added a slew of new products and functionality to adapt to the way data centers are operated.  Today Veeam is aiming to provide availability for our data no matter where it lives – in a hypervisor, in the cloud, or in SaaS applications – Veeam is trying to be the end-all, be-all when it comes to providing availability of data!

So yeah, we have definitely seen growth within Veeam as a company over the last decade!  Everyone notices that – but hopefully this helps shed some light on how the organization itself has had to adapt to change.  In technology, nothing is static – we are always seeing new bright and shiny things – and as Charles Darwin is said to have put it, those who can adapt to change – or, in today’s terms, respond to how the market is fluctuating in front of them – will be the ones who survive!  Call it agility, responsiveness, innovation, whatever – in the end it all comes down to how a company responds – and this is exactly what companies like Veeam need to do to not just stay ahead in today’s world, but stay afloat!

So yeah, a post about a press breakfast!  Thanks for reading!

Consuming the Veeam REST API in PowerShell – Part 1 – Starting a job

Since the version 7 release of Veeam Backup & Replication, all of the typical Enterprise Manager functionality has been exposed via an XML REST API.  Being a pretty heavy user of vRealize Orchestrator, this has proven extremely useful to me when looking to automate certain parts of my infrastructure.  That said, there are times when vRO is simply out of reach – or when the person I’m creating the automation for is simply more familiar with PowerShell.  Now, I understand that Veeam Backup & Replication does come with PowerShell support, and what I’m about to walk through may be somewhat redundant as they have their own cmdlets built around certain tasks – but this crazy IT world we live in is changing, and REST seems to be at the helm of that.  We are seeing more and more vendors first create a REST API and then consume it themselves in order to provide customers with a GUI front-end.

So, in the spirit of learning how to work with the Veeam REST API I decided I’d take the time to document how to perform some of the sample functions within their API reference using nothing but PowerShell.  This first post will deal solely with how to start an existing Veeam Backup & Replication job.  Keep in mind that the very nature of REST means that although the bodies and headers may change, the process of consuming it is relatively the same no matter the application – so there is some valid learning to be had regardless of the end product.

PowerShell and interacting with REST.

Before jumping right into Veeam specifics we should first discuss a few things about the PowerShell cmdlet we will need to use – as well as specifics of the Veeam Enterprise Manager REST API itself.  REST APIs are nothing more than simple HTTP requests sent to an endpoint – meaning they are consumed by simply sending a request (a GET, PUT, POST, etc. – whatever the API supports) to a URI.  From there, the API takes a look at what was passed and returns what it normally would with an HTTP request – a header, a status code, and a body.  It’s this response that we need to parse in order to discover any details or information pertaining to our request – it lets us know whether or not the operation was successful, and passes back any valid data as it relates to the request.  Now, in Veeam’s case they use an XML-based API for Enterprise Manager.  This means we can expect to see the response body in an XML format – and, if we ever need to create a body to pass with a request, we would need to first form that body as XML before we send it!  All of this sounds kind of difficult – but in the end it really isn’t, and you will see that as we create our first script!  Really, there are two key PowerShell specifics we are using…

  • Invoke-WebRequest – this is the cmdlet we use to send the API call, passing a URI, a method, and sometimes a header
  • XML – this is a simple way to take our response and label/cast it as xml in order to more easily parse and retrieve the desired information from it
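To show what that [xml] cast buys us before we touch a live server, here’s a tiny self-contained illustration – the element names below are placeholders modeled on the walkthrough that follows, not the real Enterprise Manager schema:

```powershell
# A made-up response body in the same shape we'll parse later
$fakeContent = @"
<LogonSession>
  <Links>
    <Link Type="JobReferenceList" Href="http://localhost:9399/api/jobs" />
  </Links>
</LogonSession>
"@

# Casting the string to [xml] gives us dot-notation access to any node,
# which we can then filter with Where-Object just like any other object
$xml = [xml]$fakeContent
$uri = ($xml.LogonSession.Links.Link |
    Where-Object { $_.Type -eq 'JobReferenceList' }).Href
$uri    # -> http://localhost:9399/api/jobs
```

This same cast-then-filter pattern is all the walkthrough below really does, just against real response bodies.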

So with that said, let’s get scripting…

First Step – Get a SessionId

The first step in any API consumption is usually authentication – and aside from the scopes and methods themselves, this is normally where we see the most discrepancies between vendors.  With Veeam we simply send a POST request to the sessionMngr resource type and retrieve a sessionId.  It’s this sessionId which will then need to be included within the header of all subsequent requests to the API – this is how we are identified and authenticated.  Now, you could send a GET request to the root of the API scope and parse through all of the returned content to find a specific version’s URI if you wanted – but I happen to know that we can simply use ?v=latest within Veeam to always use the latest and greatest version.  So let’s go ahead and authenticate against the API and retrieve our sessionId with the following code:

$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
$sessionId = $response.Headers["X-RestSvcSessionId"]

Looking at the code above we are basically doing a couple of things – first, we issue a POST request to http://localhost:9399/api/sessionMngr/?v=latest, having the system prompt us for the credentials that will perform the actual authentication.  Then we parse the returned headers in the response in order to grab our sessionId.  If all goes well, you should be left with a session ID string stored in the $sessionId variable – and now we are authenticated and ready to start requesting…

Now let’s start that job!

So the first example in the REST API Reference is starting a specific job – to do this we first need to get the URI for the jobs resource.  Now, we could go ahead and simply look this up in the reference guide, as it has all the information (***hint*** it’s http://localhost:9399/api/jobs) – but where’s the fun in that?  The response we just received from logging in has all of the information we need to grab the URI programmatically – and, should things ever change, we won’t have to rewrite our code if we grab it from the response.  So, to get the proper URI we can use the following one-liner to parse our content as XML and find the correct child node…

$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href

Now that we have the proper uri we can go ahead and make a GET request to it to return a list of jobs within Enterprise Manager.  But, remember we have to pass that sessionId through the request header as well – so in order to do this we issue the following commands…

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, our $response.Content will contain a lot of information, including all of our job names and subsequent metadata with them.  So, in order to find the proper uri for my job (Backup Scoreboard) I can  use the following command to once again retrieve the uri for our next call.

$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq 'Backup Scoreboard'}).Href

Once we have that – we again send a GET request to the new uri

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, we get a lot of information when looking at our $response.Content – this time the Job entity comes back with a few different Hrefs, each relating to a different action that can be taken on our job.  In our case we are simply looking to start the job, so let’s go ahead and grab that URI with the following command…

$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href

And finally, to kick the job off we send, this time a POST request, using the uri we just grabbed…

$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}

Now if everything has gone as intended we should be able to pop over to our VBR console and see our job running.  Now wasn’t that way easier than right-clicking and selecting Start?  One thing I should note is that we can parse this response body as well and grab the taskId for the job we just started – from there we can query the tasks resource to figure out its status, result, etc.  For those that learn better by simply seeing the complete script, I’ve included it below (and in fairness, running this script is faster than right-clicking and selecting ‘Start’).  In our next go at PowerShell and the Veeam API we will take a look at how we can instantiate a restore – so keep watching for that…  Thanks for reading!

$backupjobname = "Backup Scoreboard"
#Log in to server
$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
#Get Session Id
$sessionId = $response.Headers["X-RestSvcSessionId"]
# Get Job Reference link
$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href
# List jobs
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
# get specific job from list
$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq $backupjobname }).Href
#get job actions
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
#get start action
$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href
#Start job
$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}
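To illustrate that earlier point about querying the tasks resource, here’s a hedged sketch of polling the started job’s task.  It requires a live Enterprise Manager session ($sessionId from the script above), and the /api/tasks URI pattern plus the Task node names (TaskId, State, Result) are my best reading of the API reference – treat them as assumptions to verify:

```powershell
# Sketch only - assumes the POST above returned a Task entity in $response
$taskId = ([xml]$response.Content).Task.TaskId
$uri = "http://localhost:9399/api/tasks/$taskId"

# Poll every few seconds until the task leaves the Running state
do {
    Start-Sleep -Seconds 5
    $task = [xml](Invoke-WebRequest -Uri $uri -Method "GET" `
                  -Headers @{"X-RestSvcSessionId" = $sessionId}).Content
} while ($task.Task.State -eq "Running")

# Once finished, Result tells us whether the job start succeeded
$task.Task.Result
```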

The StarWind Cloud VTL for AWS and Veeam

When companies are approaching a data protection strategy, something dubbed the “3-2-1 rule” often comes up in conversation.  In essence, the 3-2-1 rule is designed as a process to ensure that you always have data availability should you need it.  That’s 3 copies of your data, on 2 different media types/sets, with 1 copy located offsite.  Now, when taking this rule and applying it to our data protection design, the subject of tape usually comes up, as it provides that second type of media we need to satisfy the “2” portion.  Tape has had a place in data protection for a long time, but the mundane tasks of removing a tape and inserting another just don’t fit well inside our modern datacenters.  When it’s time to restore, we are then left with the frustration of finding the proper tapes and then the slow performance of moving data off of that tape back into production.  That’s why companies like StarWind initially built what is called the Virtual Tape Library (VTL).  The StarWind VTL mimics a physical tape library, however instead of requiring the manual intervention of removing and loading tapes, it simply writes the data to disk.

The StarWind VTL is nothing new – in fact it’s been around since 2009.  But just this past month at VeeamON, StarWind announced yet another version of their VTL – only this time, instead of just writing the data to local disk, they now have the option to additionally sync those virtual tapes to the cloud.  The software, called StarWind Cloud VTL for AWS and Veeam, couldn’t come at a more opportune time, as only a week before the announcement “WannaCry” was worming its way through Europe, encrypting both production and backup data – leaving those companies without some sort of offsite, air-gapped backup without a whole lot of options.


So how does it work?

The StarWind Cloud VTL for AWS and Veeam is 100% software based – no extra hardware or appliances need to be racked and stacked in your datacenter at all.  In fact, for convenience and cost reasons, StarWind Cloud VTL can even be installed directly alongside your Veeam Backup & Replication backup server.  If you have ever installed any other StarWind products then the Cloud VTL setup will look very familiar, utilizing a very easy to use wizard-style installation.

Once installed, the configuration (as shown below) is really just adding our virtual tape device (drive) and however many virtual tapes we want.  As we can see, StarWind actually mimics the HPE MSL8096 tape library – therefore we may need to pull down the appropriate device drivers in order to support it.  Once installed we are essentially left with an iSCSI target that points to our VTL, which in turn maps to local disk.


So by now you might be thinking “Hey, these tapes are mapped to disk, not cloud” – and you are absolutely correct in that thought.  StarWind Cloud VTL implements what they call a Disk-to-Disk-to-Cloud process – meaning data is first copied to disk (StarWind) from disk (production) and then further replicated to cloud (Amazon S3/Glacier).  This scenario allows the actual Veeam tape job to complete much faster, as it’s simply streaming to local disk – after which the data is replicated to Amazon.  To set this up we simply need to click on the ‘Cloud Replication’ option within the StarWind management console and provide our access and region information for our S3 bucket.

Above I hinted at yet another feature of the StarWind Cloud VTL with the mention of Glacier.  We can see a few options as they pertain to retention – the most interesting being the ability to migrate our tape data out of S3 and into the cheaper, more archive-suitable Glacier service after a certain period of time.  This tiering feature allows us to keep costs down by essentially staging and de-staging our backup data depending on age – moving older data to a lower-tier, lower-performance storage class while keeping our most recent restore points on a higher-performance cloud storage service.


We can also see that we have options surrounding when to purge the local on-site tape data as well as how long to  wait after the virtual tape has been ejected locally before we start the replication to S3.

That’s really it as far as the StarWind setup is concerned.  The only thing left to do now is set up the VTL as a tape server within VBR.  Before we can do this we will first need to establish a connection to our VTL.  Just as with StarWind Virtual SAN, this is simply an iSCSI target that is mounted with the standard Windows iSCSI tools.  As mentioned previously, the VTL mimics an HPE MSL8096, so be sure those drivers are downloaded and installed to ensure the VTL can be discovered.
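For those who prefer scripting the initiator over the GUI, the connection can be sketched with the built-in Windows iSCSI cmdlets – the portal address below is a placeholder for wherever your StarWind VTL is listening:

```powershell
# Sketch: point the Windows iSCSI initiator at the StarWind VTL and connect.
# 192.168.0.10 is a placeholder for your StarWind server's address.
New-IscsiTargetPortal -TargetPortalAddress "192.168.0.10"

# Connect any discovered targets that aren't connected yet
Get-IscsiTarget |
    Where-Object { -not $_.IsConnected } |
    Connect-IscsiTarget
```

Once connected, the tape library and drives should show up in Device Manager (with the HPE drivers installed) and be visible to VBR.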

For the VBR configuration we simply add the StarWind VTL we have set up into our backup infrastructure as a “Tape Server”.  After doing so we should be able to see all of the virtual tapes we have created, and can simply set up our tape jobs or File to Tape jobs just as we always have within Veeam – only this time, our tapes are being replicated to S3.

In the end I think StarWind is on to something here!  This is their first go at cloud replication and I’m sure there is much more to come.  In fact we have already seen the addition of Microsoft Azure blob storage to the StarWind Cloud VTL portfolio, so things are moving quickly.  The idea of still achieving the ultimate goal of the 3-2-1 rule while not having to physically mess around with tape is appealing – not to mention that by utilizing cloud we get that offsite, scalable storage tier without the need to manage, update, or even procure the hardware.  Personally I can see Veeam shops jumping on this.  It makes for that ideal environment of having an uber-fast backup repository for your most recent backups on-site, while leaving StarWind and AWS with the job of migrating and managing the more “cold” archival data in the cloud.  Remember, you don’t want to be “that” IT shop that can’t recover from the next piece of ransomware that comes down the pipe.  If you would like to give StarWind Cloud VTL for AWS and Veeam a shot, you can pick up a free 30-day trial here.

My Veeam VMCE-A Course and Exam Experience

First up, a bit of a back story – as a Veeam Vanguard I was lucky enough to have received the required training last June in order to qualify for the VMCE exam, which I wrote and passed in August of 2016!  A nice little perk of the program if you ask me!  Anyways, earlier this month a handful of us were again lucky enough to participate in the next level of training, the VMCE-A Design & Optimization course, in an online pilot – thus qualifying us to write the VMCE-A exam.  Under normal circumstances I would take a lot of time to study and possibly create guides on this blog for the certifications I write – however, with VeeamON right around the corner and the ability to take advantage of a “Free Second Chance” offer for writing certifications on site, my normal study strategies didn’t apply.  I couldn’t pass up the chance of at the very least getting a look at the exam, even if it meant failing – hey, a free second chance!

So with the course fresh in my memory I studied where I could between it and my exam appointment at the conference – be it on the car ride to the airport, at 30,000 feet in the air, or during a few meals at the conference.  Anyways, the tl;dr version is I passed the exam… barely – scoring only 4% over the pass mark of 70%.  Not a mark I’d be super proud of, but in the end a pass is a pass and I’ll take it!

On to the exam!

The VMCE-A D&O exam is 40 randomized questions, all multiple choice.  Some questions have only one answer, while some are in the “Select 2 / Select 3” format.  As mentioned earlier, a passing score is 70% or higher.  As far as the content goes I can’t say a lot, as NDAs are in effect; what I can say is that every question I received is fully covered within the VMCE-A D&O course material – and in fact, at the end you get a nice detailed breakdown of how you scored in the different sections covered in the course (Design & Sizing, Infrastructure, Security, Optimization, Automation & Compliance, and Troubleshooting).  This certainly helps you nail down where you might want to freshen up in order to improve your skill set.

One big thing I will say is that this exam is tough!  For as easy as Veeam can be to simply get up and running, there is a lot to know about their complete suite of products – and a lot to cover in order to test on all of the features and benefits of just Veeam Backup & Replication.  Being a customer, I’m not designing Veeam solutions day in and day out, so I focused a lot of my attention on the design section, as well as on parts of VBR that I don’t use that often.  But just as with the VMCE, it’s not enough to focus solely on VBR – Veeam ONE, Cloud Connect, etc. are all fair game for testing on this exam – so if you don’t use them I would certainly recommend brushing up on them.  I can’t stress enough that all of the content I was tested on is covered within the course materials (textbook/slides) – so pay attention during the course!  If you see something labeled as a best practice or a formula, remember it – this is an architect exam based on designing Veeam environments!  Just keep that in the back of your mind while studying!

As far as timing goes, you have 1 hour (add another 30 minutes if English isn’t your first language) to complete the 40 questions.  I found this to be more than enough time.  Just like VMware’s VCP exams, you can flag certain questions for review and go back and forth between questions at your leisure.  The strategy I took, since I had no idea how much of a time crunch there might be, was to simply go through the questions, answering the ones I knew I had right and flagging any I was unsure of for review.  This process took me roughly 30 minutes, which left another 30 minutes to go back and review the questions I didn’t quite have a grasp of.  My review took roughly 10 minutes – after that I went through every question again, double-checking and tallying in my head how many I knew I had right, hoping to come up with a score high enough to make me feel comfortable clicking that dreadful ‘End Exam’ button.  In the end I knew it was close, but I ended it anyways!

You get your score immediately after completing the exam – so you know whether it was a pass or fail right away, with no painful time spent wondering.  Also, as mentioned earlier, upon exiting the facility you will get a printout showing how you scored in each category.  I’m certainly happy I passed, and I know I can for sure improve in some areas – maybe another study guide is in the cards for me!

The Veeam Certification Paths

For those that don’t know, Veeam currently has two different certifications.  The VMCE documents that the engineer has the necessary level of knowledge to correctly deploy, configure and administer the Veeam Availability Suite.  The VMCE-A D&O builds on the knowledge from the VMCE, bringing more of a design-and-optimize feel to the test, all the while following Veeam best practices.  Once you have achieved both the VMCE and the VMCE-A, Veeam accredits you with the title of Veeam Certified Architect, or VMCA.  The VMCA is not a separate certification and does not require a separate step – it’s simply a designation handed to those who have completed the requirements for both the VMCE and VMCE-A and passed both exams.


A little about the course

Honestly, even if you don’t go through with the exam, the VMCE-A Design & Optimization course is an awesome course to take.  I guarantee you will get something out of it even if you design on a daily basis.  For me, being a customer and administrator of these products, it was an awesome opportunity to walk through the Veeam design methodologies, deep-diving into each step one by one to come out with a full solution.  The course has a couple of design scenarios inside of it, for which there is really no right or wrong answer.  We broke into a couple of different groups to do these, and it was amazing to see just how different the end designs were.  The instructors take the opportunity to pick away at these designs, trying to understand your thought process and figure out how you think – asking a lot of questions about why you set things up the way you did!  This to me was the biggest advantage of the course – having that interaction, learning other ways to accomplish similar results, and seeing where you might be going astray in your thought process.

So with that I hope this helps anyone else who might be on the fence about taking either the course or the exam.  I can proudly say that I am a VMCA now and that feels great (and I’m glad I don’t have to cash in that second chance as it’s a very tough exam – or at least it was to me).

#VeeamON 2017 – Wait! There’s more!

If you packed your bags and started to shut down after the Wednesday keynote, thinking you had heard the last of the VeeamON announcements, then you might want to dig out that notebook and sharpen your pencils again, as we aren’t done yet here at VeeamON 2017!

On Thursday, Paul Matiz took the stage for the final keynote of the show and made some more announcements around existing products – and threw down the gauntlet on one brand new product, Veeam PN, which I covered in a separate post!  That said, Veeam PN wasn’t the only Thursday announcement – there were a few others, outlined below.

Veeam Backup for Office 365

Veeam released the first version of their SaaS-based email backup for Office 365 last year, and honestly, people flocked to it!  With more and more companies migrating to the Microsoft-hosted solution rather than putting up with the headaches of dealing with multiple on-premises Exchange servers, Veeam wanted to take advantage and help those organizations protect their most critical communication asset!

With version 1.5, announced just the other day, scalability features have been added that let you horizontally scale your Office 365 backups – adding new proxies and repositories to help speed up the time it takes to pull down your O365 mailboxes.

In addition to this, automation has also been a focus.  With full PowerShell support coming into the product, we can now use the familiar verb-noun cmdlets to back up and restore Office 365 mailboxes.  And it’s more than just PowerShell – a fully supported RESTful API is also available.
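As a rough sketch of what those verb-noun cmdlets look like – Get-VBOJob and Start-VBOJob come from the Veeam Backup for Office 365 PowerShell module, though the module name and the job name below are assumptions to verify against your own install:

```powershell
# Sketch: start an existing Office 365 backup job by name.
# "O365 Mailboxes" is a made-up job name - substitute your own.
Import-Module Veeam.Archiver.PowerShell
$job = Get-VBOJob -Name "O365 Mailboxes"
Start-VBOJob -Job $job
```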

That said, why stop at version 1.5 – let’s get moving on to 2.0!  The Veeam community spoke – we loved the ability to back up email and use the explorer functionality to perform restores back into mailboxes – but Office 365 is so much more.  What about SharePoint?  What about OneDrive?

Well, Veeam now has an answer for those questions, as they released something of a roadmap for their Office 365 backup strategy, with both SharePoint and OneDrive on it in version 2.0.

Veeam Management Pack v8 update

For those System Center users who use the Veeam Management Pack to monitor and gain insights into their critical applications running on both VMware and Hyper-V, you will be pleased to know that Veeam has released a slew of new features in Veeam MP v8 update 4 – including Azure-backed dashboards, available to Veeam MP users as soon as they update.

Veeam has certainly announced a lot this week – with a heavy focus on cloud.  With Mark Russinovich doing the final keynote of the conference, I can certainly say that cloud is most definitely the future – and data protection needs to be part of that!

Veeam announces the new Veeam Powered Network (Veeam PN)

During the final keynote of VeeamON 2017, Veeam took the stage and threw down the gauntlet on a brand new product release: the Veeam Powered Network, or Veeam PN for short.

Veeam PN is a new product – not a feature added to any other – which was initially developed to solve an internal issue within Veeam.  Veeam has a lot of employees and developers in remote sites all across the world – and the pain of constantly connecting those sites together via VPN, coupled with the frustration of tunnels dropping all the time, gave birth to Veeam PN.  It feels a lot like how a VMware fling comes to life: first internal only, then released to the masses, then actually built out as an application offering.  Although Veeam PN can be used to establish this connectivity between any sites at all, the real benefits and the initial design intentions all focus on Microsoft Azure.

Veeam PN – Disaster Recovery to Microsoft Azure

Veeam PN is deployed into your Azure environment via the Azure Marketplace.  Once your cloud network has been established, another virtual appliance is deployed from veeam.com into your on-premises environments.  From there it’s as simple as selecting which networks you wish to have access into Azure and importing the automatically generated site configuration files at your remote sites – with that, you have a complete and secure site-to-site tunnel established.  I’m not sure of the scalability of Veeam PN just yet, but I do know it supports having multiple sites connected into Azure for those ROBO situations.  Remote workers on the road can simply connect to Veeam PN and download a configuration file that simplifies their setup of the OpenVPN client to establish a client-to-site VPN.


So at this point you may be thinking “Why would Veeam develop this tech focused around networking, and what does it have to do with backup or DR?”  Well, let’s couple this together with a little feature Veeam has called Direct Restore to Microsoft Azure.  By recovering our VMs and physical endpoints directly into Azure, and then easily establishing the network connectivity using Veeam PN, we can now leverage true DR in the cloud in an easy-to-use, scalable, and secure way.  This is the “nirvana” of recovery that we have all been looking for.

One more thing – it’s free!

There it is – the Veeam way!  They released Backup & Replication with a free tier, the Windows/Linux endpoint agents are free, Direct Restore to Azure is free, the explorer tech is free!  Let’s add one more to that list: Veeam PN is absolutely free!  And even though they have talked a lot about it being leveraged for Azure, companies and organizations can essentially use this technology to connect any of their sites and clients together.

Details around any betas or GA haven’t been revealed yet but keep your eyes open and I’ll do my best to help spread around any opportunity for you to get your hands on the new Veeam Powered Network!

A glimpse into #VeeamVanguard day!

Sure, the Veeam Vanguard program comes complete with tons of great swag and free trips to VeeamON and whatnot – but in all honesty the biggest benefit of the program, in my opinion, is the access that Veeam provides – access to fellow Vanguards and access to key people within Veeam, across the whole company from executives to SEs.  Here at VeeamON 2017 we get a special day jam-packed full of access – and below is a bit of a lowdown on what happened (or as much as we can tell you about, anyway).

Veeam Availability Orchestrator – Michael White

The day started off with a couple of hours with Michael White (@mwVme) giving the lowdown on Veeam Availability Orchestrator – one of Veeam’s newest products, which helps orchestrate and automate disaster recovery failover.  Before getting into any product specifics, Michael went through a brief discussion of what Disaster Recovery and Business Continuity actually are, and how we can best prepare for any situation that may occur.  Michael is a perfect fit to evangelize this product, as he had a lot of examples from other companies he has worked for over the years, outlining how he was prepared – or, at times, unprepared – for disasters that hit.  In all honesty it was a great way to start the day, getting a little bit of education rather than just immediately diving into product specifics!

Veeam Availability Console – Clint Wyckoff

Directly after Michael we had Veeam evangelist Clint Wyckoff come in and give us a breakdown of the new release candidate of Veeam Availability Console.  I’ve seen the product before, but like anything Veeam there are always a number of changes in a short time – and it was nice to see the product as it moves into tech preview.  For those that don’t know, VAC is Veeam’s answer to a centralized management solution for large, dispersed enterprises as well as Veeam Service Providers, letting them manage, deploy, and configure both their Veeam Backup & Replication servers and the newly minted Veeam Agents for Microsoft Windows and Linux.

Vanguard support from the top down

One great thing that I like about the Veeam Vanguard program is that it’s not just a “pet project” for the company.  During Vanguard day we were introduced to Danny Allan, VP of Cloud and Alliance Strategy at Veeam.  Danny is our new executive sponsor at Veeam – meaning we have support at the highest levels of the company.  It’s really nice to see a company sink so much support and so many resources from all roles into a recognition program – one of the many reasons why I feel the Vanguard program is so successful.

Nimble

After lunch we had Nimble come in and brief us on their Secondary Flash Array and the interesting performance enhancements it has when used with Veeam.  Last year during our Vanguard day we didn’t have any vendor other than Veeam present, so it’s nice to see some of Veeam’s partners and ecosystem vendors reaching out to talk with us.  Nimble certainly has a great product – and since I’m not sure what all was covered under NDA, I’ll simply leave it at that!

AMA with Veeam R&D

Earlier, when I mentioned that one of the biggest benefits of the Vanguard program was access, this is basically what I was referring to.  For the rest of the afternoon we had a no-holds-barred, ask-me-anything session with Anton Gostev, Mike Resseler, Alexey Vasilev, Alec King, Vladimir Eremin, Dmitry Popov, and Andreas Neufert – all Veeam employees who manage, or work very closely with, R&D – deciding what features are implemented, when they get implemented, and basically defining a roadmap for when these features get inserted into products.  Now, this session was definitely NDA, as a lot was talked about – but just let me say this was the best and most interesting portion of the whole day!

With so much being under NDA and embargo there isn’t a lot I can tell you about the content – but for those wondering this is just a brief description of how much access you get into Veeam being under the Vanguard label.  Certainly if you wish, I encourage you to apply for the program – you won’t regret it!

Veeam Availability Suite v10 – what we know so far…

Although we got a hint at some of the announcements coming out of VeeamON during partner day on Tuesday, it was really the general session Wednesday morning that brought forth the details of what Veeam has in store for the future.  In true Veeam fashion we saw yet more innovation and expansion in their flagship Veeam Availability Suite – covering your data protection needs across all things virtual, physical, and cloud.  So without further ado let’s round up some of what we saw during the Wednesday keynote at VeeamON 2017.

 

Veeam Agent Management

It’s no surprise that as soon as Veeam released support for protecting Windows and Linux physical workloads, customers and partners all begged for integration into VBR.  Today we are seeing just that, as Veeam has wrapped a very nice management interface around backups for both our virtual machines and our physical Windows and Linux workloads.  This not only gives us the ability to manage those physical backups within VBR, but also the ability to remotely discover, deploy, and configure the agents on the physical endpoints as well!

Backup and restore for file shares

Veeam Availability Suite v10 brings with it the ability to back up and restore directly from our file shares.  Basically, SMB shares can be accessed via a UNC path and their files backed up and protected by Veeam.  Different from Veeam’s traditional restore points, though, backup for file shares doesn’t necessarily store restore points – it acts more like a versioning system, allowing administrators to state how many days they would like to version the files, whether or not to keep deleted files, and some long-term retention settings on top of that.  This is a pretty cool feature set to add to v10, and I can’t wait to see where it goes – perhaps the file share functionality can somehow be mapped to the image-level backup so the two work together, restoring complete restore points and then applying any newer file versions that may exist.
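The versioning model described above is easy to reason about with a toy sketch.  Nothing here reflects Veeam’s actual implementation – it’s just an illustration of the retention rule “keep every version of a file newer than N days”:

```python
from datetime import datetime, timedelta

def prune_versions(versions, keep_days, now=None):
    """Keep only file versions newer than keep_days.

    versions: list of (timestamp, payload) tuples for one file,
    purely illustrative of a version-based retention policy.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=keep_days)
    return [(ts, data) for ts, data in versions if ts >= cutoff]

now = datetime(2017, 5, 17)
versions = [
    (datetime(2017, 5, 16), "v3"),
    (datetime(2017, 5, 1), "v2"),
    (datetime(2017, 4, 1), "v1"),
]
print(prune_versions(versions, keep_days=7, now=now))
# → [(datetime.datetime(2017, 5, 16, 0, 0), 'v3')]
```

With a 7-day window only the newest version survives; widen the window and older versions stay around – which is exactly the knob administrators get to turn.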

Continuous Data Protection for all

Perhaps the most exciting news of all is Veeam’s announcement of support for Continuous Data Protection, allowing enterprises and organizations to drastically lower their RPO – down to a whopping 15 seconds.  Ever since Veeam hit the market, their replication strategy has been to snapshot VMs in order to gain access to CBT data and replicate it across.  That said, we all recognize the pain points of running our infrastructure under the impact of snapshots.  That’s why, with the new CDP strategy set forth by Veeam today, they will utilize VMware vSphere’s APIs for I/O filtering to intercept and capture the I/O streams to our VMs and immediately replicate the data to another location.  This to me is a huge improvement to an already outstanding RTPO that organizations can leverage Veeam to achieve.  This is truly groundbreaking for Veeam, as we can now say we have, for example, 4 hours of 15-second restore points to choose from.  It’s nice to see a vendor finally take advantage of these APIs from VMware.
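To put that retention window in perspective, a quick back-of-the-napkin calculation (plain Python, nothing Veeam-specific) shows how many restore points a 15-second interval produces over a 4-hour window:

```python
def restore_points(window_hours, interval_seconds):
    """Number of restore points kept for a given retention window
    and per-point interval (i.e. the RPO)."""
    return (window_hours * 3600) // interval_seconds

print(restore_points(4, 15))  # → 960
```

That’s 960 individual points to choose from – versus a handful per day with traditional snapshot-based replication jobs, which is why the snapshot-free I/O-filtering approach matters.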

vCloud Director Integration into Cloud Connect

Veeam service providers have been giving many customers the ability to consume both backup and replication as a service – allowing customers to essentially ship off their data to them, with the SP becoming the DR site.  That said, it has always been limited to VMs that live within vCenter and vSphere.  Today Veeam announced support for vCloud Director organizations within the Cloud Connect offering – allowing those running vCloud Director to also consume the DR as a Service that Veeam partners have been providing, keeping their virtual datacenters and hardware plans while failing over their environments.

Veeam Availability for AWS

Yeah, you heard that right!  We saw Veeam hit the market focusing solely on virtualized workloads, then slowly move into supporting physical workloads – and now they support the most well-known public cloud: Amazon AWS.  Cloud always introduces risk into an environment, which in turn means we need something exactly like Veeam Availability for AWS to protect those cloud workloads and ensure our data is always recoverable and available if need be.  In true Veeam fashion, the solution will be agentless.

Ability to archive older backup files

Veeam v10 brings with it the ability to archive backup files off to cheaper storage as they age within our backup policies.  We all know that cloud and archive storage is a great fit for this, so guess what – we now have the ability to create what is called an “Archive Storage” repository, which can live on any type of native object storage, be it Amazon or even your own Swift integration.  This frees up your primary backup storage to handle things such as restores, while the archive storage does what it does best – holding those large, less frequently accessed backup files.

Universal Storage Integration API

For the last few VeeamON events, the question of who the next storage vendor to integrate with Veeam would be has always been on everyone’s mind.  With the announcement of the new Universal Storage Integration API, the next storage vendor could literally be anyone.  This is an API set that allows storage vendors to integrate with Veeam – essentially giving Veeam the ability to control the array, creating and removing storage snapshots, and allowing customers to lower RTO and RPO without ever leaving the familiar Veeam console.
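Veeam hasn’t published what that API surface actually looks like, but conceptually a storage-integration contract is just a small interface the vendor implements and the backup engine calls.  A hypothetical sketch – the class and method names here are invented for illustration, not Veeam’s real API:

```python
from abc import ABC, abstractmethod

class StorageSnapshotProvider(ABC):
    """Hypothetical contract a storage vendor would implement so a
    backup engine can orchestrate array-level snapshots."""

    @abstractmethod
    def create_snapshot(self, volume_id: str) -> str:
        """Snapshot a volume, returning a snapshot id."""

    @abstractmethod
    def delete_snapshot(self, snapshot_id: str) -> None:
        """Remove a snapshot once the backup job no longer needs it."""

class InMemoryProvider(StorageSnapshotProvider):
    """Toy implementation used here only to show the call flow."""

    def __init__(self):
        self._snaps, self._next = {}, 0

    def create_snapshot(self, volume_id):
        self._next += 1
        snap_id = f"snap-{self._next}"
        self._snaps[snap_id] = volume_id
        return snap_id

    def delete_snapshot(self, snapshot_id):
        del self._snaps[snapshot_id]

provider = InMemoryProvider()
snap = provider.create_snapshot("vol-01")  # engine asks the array for a snapshot
print(snap)  # → snap-1
provider.delete_snapshot(snap)             # engine cleans up afterwards
```

The appeal of a universal API is exactly this shape: any vendor that fills in the two methods against their own array plugs into the same orchestration flow.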

This honestly just scratches the surface of the announcements Veeam has in store for us this week, so stay tuned – there is another keynote tomorrow where I’m sure we will hear more about VBR v10 and possibly some NEW product announcements.  For now, it’s off to some deep dives to learn more about these great features!  Thanks for reading!

Veeam Availability Orchestrator – Automation for your Disaster Recovery

As members of the Veeam Vanguard program here at VeeamON 2017, we got to spend a couple of hours with Michael White (@mwVme), who gave us an update on Veeam Availability Orchestrator – Veeam’s answer to orchestrating and automating failover to replicated VMs.  Michael certainly is a great choice when looking for someone to evangelize this product, as he had a number of examples of DR situations he has either helped with or guided companies through – with both good and bad outcomes!  But back to topic – VAO was announced a while back; in fact, over a year ago Veeam announced their plans for VAO during their “Next big thing” event in April of 2016.  Since then I’ve gotten to see the application move along through various beta stages, and I was pleasantly surprised to see how the product has matured as they gear up for their 1.0 release (no, I don’t know when that is).

For those not familiar with VAO, let me give you a bit of a breakdown.  VAO is essentially a wrapper, or an engine, that interacts with other Veeam products via API calls.  Think Veeam ONE, Veeam Business View, and Veeam Backup & Replication all talking to one centralized Disaster Recovery orchestration machine.  As far as the architecture goes there really isn’t anything special – it’s a web interface with a SQL backend.  As far as I know, the only limitations associated with Veeam Availability Orchestrator are that it is only supported within a VMware environment and that an Enterprise Plus license must be applied to the VBR instance VAO connects to.

So what does VAO do that VBR doesn’t?

Hearing phrases like “testing our replicas” and “using the Virtual Labs”, you might be wondering what exactly VAO does that VBR doesn’t.  I mean, we have the SureReplica technology within VBR and it works great at testing whether or not we can recover, so why would we need VAO?  The answer here is really about the details.  Sure, VAO doesn’t re-invent the wheel when it comes to DR testing – why would they force you to reconfigure all of those Virtual Labs again?  It simply imports them, along with a lot of other information from VBR.  That said, VAO does much, much more.  From what I’ve seen, we can break VAO down into three separate components.

Orchestration

VAO takes what you have already set up within VBR and allows you to automate and orchestrate around it.  Meaning: we have already replicated our VMs to a DR location, set up our failover plans and virtual labs, and completed configuration around re-IPing and post-failover scripts to handle our recovery.  VAO takes all of this and adds flexibility to our recovery plans, letting us execute and trigger pre- and post-failover scripts, along with per-VM testing scripts as well.  At the moment we are limited to just PowerShell; however, we may see more scripting languages supported come GA time.  Essentially, VAO gives us more flexibility in running and triggering external processes during a failover event than what VBR provides on its own.
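VAO’s scripts are PowerShell, but the idea behind a per-VM test is simple enough to sketch in any language: after failover, probe a service port on the recovered VM and report pass/fail.  This Python version is purely illustrative (the hostname and port in the usage comment are placeholders, not anything VAO-specific):

```python
import socket

def service_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout seconds – a minimal 'is the service up?' failover check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. service_reachable("exchange-dr.example.local", 443)
```

A real orchestration engine would run a check like this against every VM in the failover plan and surface each result in the run report – which is exactly the per-VM detail described above.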

Automated DR Testing

VAO takes all of this failover orchestration and applies it to our testing environments as well.  By giving us the ability to test, and test often, we as organizations can drastically increase our success rate when a true disaster occurs.  Certainly virtualization has really impacted our ability to test DR plans – in a good way – but there are still a lot of challenges when it comes to performing a true test, and VAO closes that gap even more.

Dynamic Documentation

Probably the biggest feature of VAO, in my opinion, is its ability to automatically and dynamically create Disaster Recovery documentation.  DR documentation is often overlooked, left sitting on some file server, stale and never updated.  Environments today are under constant change, and when our production environments change, so do our DR requirements.  VAO does a good job of dynamically pulling in any new VMs added (or old VMs removed) and adjusting its documentation accordingly.  In the end we are left with nicely updated documentation and runbooks to reference when the time comes that we need them.

All of this said, to me the true value of VAO really is its focus on the details.  From what I’ve seen, VAO does a great job of reporting any warnings, errors, or failures as they apply to any DR test or failover event – not just in its canned testing scripts (for instance, connecting to a mailbox on a failed-over Exchange server), but in our custom-built PowerShell scripts as well.  Without this attention to detail, a lot of false positives can be “assumed” during a DR test – leaving us in an inconsistent state during an actual failover event.  VAO, in all of its reporting and messaging, certainly provides nice visibility into each and every VM, and each and every task associated with that VM, inside of a failover plan.

We still don’t have a solid release date on VAO, but in true Veeam fashion let me give you this estimate – “When it’s ready” 🙂

What to expect from VeeamON 2017

I’ve had the opportunity to attend both of the previous VeeamON conferences in Vegas as well as the mini VeeamON forum last year in the UK, and since it’s still a relatively new conference on the scene I thought I’d give everyone a bit of an overview and a heads-up as to what to expect from the event!  Before going too far into how the event is laid out, let’s first take a look at the logistics.  While I do like Vegas, it tends to get a bit monotonous when it comes to conferences – making them all kind of feel like the same event.  That’s why I was ecstatic to hear that VeeamON 2017 will be held in New Orleans this year, from May 16th through the 18th!  So, as Veeam embarks on its third VeeamON event, I thought I might go over what to expect for those that may be unfamiliar with the backup vendor’s availability event.

Expect A LOT of technical information

With over 80 breakout sessions you can most certainly expect to learn something!  The thing about the breakouts at VeeamON, though, is their level of technicality.  I’ve been to many breakout sessions at other conferences that tend to be pretty marketing-heavy – while VeeamON most certainly has a marketing agenda, the sessions themselves are very technical, with a 100 level being the least technical and a 400 level introducing you to things you never even knew existed!  I can honestly say that I was skeptical when attending my first VeeamON – wondering how they could have so many breakout sessions dealing solely with backup – man, was I wrong!  Veeam B&R is a big application that touches a lot of different aspects of your infrastructure – think repository best practices, proxy sizing, automation, etc.  This year, with the addition of new products such as O365 backup, Agents for Linux/Windows, and the many storage integrations with partners, you can bet that there will be plenty of content to be shared.

Expect a smaller, more intimate conference

VeeamON, compared to the bigger conferences, is relatively small.  With roughly 2500 people in attendance last year and over 3000 expected this year, the conference is not as spread out as what you may be used to – which is a good thing!  Honestly, it’s nice being able to keep everything relatively confined to the same space, and even nicer to have no crazy lineups to cross the street like at the Moscone.  I found that VeeamON made it very easy to find people – whether you are looking for them or not.  Meaning, don’t be surprised to accidentally run into some Veeam executives in the hallways – or even the CEO in the elevator 🙂  The atmosphere during the conference days at VeeamON is nice – not so loud that you can’t have a conversation, and the solution exchange isn’t overrun with vendors competing to see who has the loudest mic.  It’s a nice, low-key conference, which makes it easy to have those valuable hallway conversations that are usually the best part of any conference.

Expect to learn a little more about the “other hypervisor”

VMworld – the place you go to learn all there is to know about vSphere.  MS Ignite – the place you go to get all your Hyper-V knowledge!  VeeamON – since Veeam B&R supports both vSphere and Hyper-V, you are going to hear a lot about both hypervisors.  You’ll see your typical VMware crowd intermingling with… you know, the other guys, all in support of the product that is protecting their infrastructure.  I’ve written about how the Vanguard program bridges this gap before – and the VeeamON conference is fairly similar in how it brings together the best of both the vSphere and Hyper-V worlds.  As my good friend Angelo Luciani always says, “We are all in this together!”

Expect announcements!

This is a given, right – every vendor-organized conference is always built around some sort of announcement or product release!  VeeamON 2014 saw the introduction of Endpoint Backup Free Edition, while VeeamON 2015 saw its OS counterpart announced with Veeam Backup for Linux – all the while lifting the lid on some major enhancements and features in their core product, Veeam Backup & Replication.  So what will we see this year in New Orleans?  Your guess is as good as mine.  Veeam just recently had a major event where they announced the evolution of the physical Windows/Linux backup products (Veeam Agent for Windows/Linux) into paid versions, coupled with the Veeam Backup Console for centralized management of our endpoints – as well, we saw the release of Veeam Backup for O365.  What else is left to announce?  I’m sure we will hear more about v10 and some top-secret features from it, but with all of the other new product announcements one might think there is nothing left to release – but a wise man who worked for Veeam once told me that they have this shelf containing a lot of products and ideas – you never know when they will take something down off of it 🙂

Expect to have ALL your questions answered

Veeam sends a lot of employees, engineers, and tech marketing folks to this conference – and I mean A LOT.  Last VeeamON, you couldn’t even walk through the Aria casino without running into at least a half dozen Veeam engineers.  What this means is, if you have questions, VeeamON is the perfect venue to ask them.  I can pretty much guarantee that they will all be answered – there will be an SME on site dealing in the areas you are having trouble with.  So don’t just make VeeamON all about learning – try to get some of those pain points that have been bugging you for a while firmed up while at the conference.  Everyone is approachable and more than willing to give you a few minutes.

Expect an EPIC party

Sometimes you just have to let go, right?  If you have ever been to a Veeam party at any of the VMworlds, you know that Veeam knows how to do just that!  In fact, I’ve heard Veeam described more than once as a “drinking company with a backup problem” 🙂  I don’t quite see it that way, but certainly you have to agree that Veeam knows how to throw a party and make you feel welcome.  Whether you are just arriving and hitting up the welcome reception or attending the main VeeamON party, I know you will have a good time, with good food and good drinks!  Veeam understands that it can’t be all about business all the time – so take the opportunity at the parties to let loose a little and meet someone new!  I’ve made many lifelong friends doing just that!

So there you have it!  Hopefully I’ve helped paint the picture of what VeeamON is like for me and maybe helped you understand it a little more!  I’m super excited for VeeamON in New Orleans this May and I hope to see you there!

Did you know there is a Veeam User Group?

Like most of you, I’ve been attending VMUGs for quite a while now, and over the last few years I’ve been helping out by co-leading the Toronto chapter.  At each and every one I attend I always get some value – whether it’s from presenting sponsors, talking with peers, or just creepily listening to conversations from the corner.  One of the challenges we seem to have is getting the “conversation” going – getting those customers and community members sitting in the audience to voice their opinions or even, at times, get up and do a presentation on something.  For our last meeting I reached out to Matt Crape (@MattThatITGuy) to see if he might be interested in presenting – Matt was quick to say yes – yes, but on one condition: would I come and present at his Veeam User Group?  So, with that, a deal was cut and I headed out this morning to my first Veeam User Group.

Veeam User Group – VMUG without the ‘M’

Matt runs the Southwest Ontario Veeam User Group (SWOVUG) – I’ve seen the tweets and blogs around the SWOVUG events taking place, and have always wanted to attend but something always seemed to get in the way – for those that know me I’m a huge Veeam user and fan – so these events are right up my alley.  So, I did the early morning thing again, battled the dreaded Toronto traffic and headed up to Mississauga for the day to check it out.

The layout of the meeting is somewhat similar to a VMUG meeting: two companies kindly supported the event – HPE and Mid-Range – and in return got the chance to speak.  HPE started with a short but good talk around their products that integrate with Veeam, mainly 3PAR, StoreOnce, and StoreVirtual.  They also touched on HP OneView and the fact that they are laser-focused on providing API entry points into all their products.

I’m glad HPE didn’t go too deep into the 3PAR integrations as I was up next and my talking points were around just that.  I simply outlined how my day job is benefiting from those said integrations; more specifically the Backup from Storage Snapshot, Restore from Storage Snapshot and On-Demand Sandbox for Storage Snapshots features.

After a quick but super tasty lunch (insert Justin Warren disclaimer post here), Mid-Range took the stage.  Mid-Range is a local Veeam Cloud Connect partner offering DRaaS and a ton of other services around that.  They did more than simply talk about the services they provide – they went into the challenges and roadblocks of consuming disaster recovery as a service, then touched briefly on how Veeam and they themselves could help solve some of those…

Finally, to cap the day off, we had David Sayavong, a local Veeam SE, take the stage to talk to us about “What’s new in version 9.5?”.  David’s presentation was not just him up there flipping through slides of features, but more of a conversation around certain features, such as ReFS integration and how all of the new Veeam Agents will come into play.  Just a fun fact for the day – the audience was asked who had already upgraded to 9.5, and honestly around a third of the room raised their hands.  That’s a third who have already upgraded to a product that GA’ed only 7 days ago – talk about instilling confidence in your customers.

Anyways, I wanted to briefly outline the day for those that may be thinking of attending like I was, but haven’t yet set aside the time to do so.

But there’s more…

I mentioned at the beginning of the post that there are always struggles with getting people to “speak up” – this didn’t seem to be the case at the Veeam User Group.  I’m not sure what it was, but conversations seemed to be flying all over the place – for instance, after I was done talking about the integration with 3PAR, a big conversation started around ransomware and security.  Each presentation seemed more like a round-table discussion than a sales pitch.  It truly was a great day, with lots of interaction from both the presenting companies and the audience – everything you want from a user group.

The user group intrigued me – and maybe some day I’ll throw my name in to try and get something started up on “my side of Toronto” – it’s Canada, right – there’s only a handful of IT guys here, so everything east of Toronto is mine 🙂  For more information about the Veeam User Groups, keep an eye on the Veeam Events page and @veeamug on Twitter!  And to keep track of the SWOVUG dates I suggest following @MattThatITGuy and watching the swovug.ca site!  Good job Matt and team on a great day for all!