Monthly Archives: June 2016

Veeam Agent For Linux beta – Notification script

So you’ve gone ahead and started to test out the new Veeam Agent for Linux beta in your environment – finally, a Veeam solution to those physical Linux servers that seem to always hang around in the datacenter!  Wait – if you haven’t, check out my post on it here and sign up for the official beta here.  Anyway, we can schedule our jobs – that’s great, but there is only one problem – we currently don’t have any way of reporting on the success or failure of those jobs.

Now this is only a beta and we don’t know what a full version of this will look like – so I don’t want to go ahead and say that this is something the application is lacking as it could be baked in with a future build – but for now let me show you my workaround – how I scripted the ability to report on success of my jobs.

First up, don’t schedule your backup jobs within the VAL GUI – leave ‘Run the job automatically’ unchecked while you are setting up your job, as we have below.  Instead, we will write a script that will first kick off the job for us, then store the session ID variable so we can later send an email on its success.  We will create our own cron entry to handle this rather than using Veeam’s scheduler.


So how do we go about creating this script?  Well, before getting too far into it I suggest you head over to Dmitry Kniazev’s blog and read his post on the Veeam Agent for Linux (Serious Mode) as it does a good job of explaining some of the CLI commands, such as veeamconfig, that come with VAL.
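If you just want to poke around the CLI before scripting anything, a couple of veeamconfig subcommands are worth knowing.  A quick sketch below – the job name and session ID are placeholders, so substitute your own:

```shell
# list the backup jobs that have been configured on this box
sudo veeamconfig job list

# list all backup sessions along with their state (Running/Success/Failed)
sudo veeamconfig session list

# drill into a specific session using the ID shown by the commands above
# (the ID here is just an example placeholder)
sudo veeamconfig session info --id 3fa85f64-5717-4562-b3fc-2c963f66afa6
```

The output of these is exactly what the notification script below scrapes with grep and cut.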

As always, let me post the full script first, then break it down piece by piece for you….  Before getting too far in though, there are a few quirks!  First off, you can see I dump some information and create some files – all of this is done in the /home/ubuntu/ directory – feel free to change this to whatever you like – I didn’t take the time to turn all of these paths into variables – sorry!  Secondly, the script uses sendmail – so be sure you have that set up properly as well!  Anyways, the script…

Sometimes throwing bash up into a blog post really messes things up so if you are having issues copying/pasting you can simply pull down the complete file  here!

#!/bin/bash
#declare some variables - change these to match your environment
MYJOBNAME="BackupJob"
EMAILTO="you@yourdomain.com"

#start the veeam job
sudo veeamconfig job start --name $MYJOBNAME > /home/ubuntu/sessionid
#get sessionid from return text
SESSID=$( grep -r "Session ID" /home/ubuntu/sessionid | cut -d " " -f3 )
SESSID=$( echo "${SESSID:1:${#SESSID}-3}")
echo "Job has started - Session: $SESSID"
#get current state (running/failed/success)
STAT=$( sudo veeamconfig session info --id $SESSID | grep "State:" | cut -d ":" -f2 )
#wait for job to not be running
while [ "$STAT" == " Running" ]
do
  echo "Job still running..."
  STAT=$( sudo veeamconfig session info --id $SESSID | grep "State:" | cut -d ":" -f2 )
  sleep 5
done
echo "Job has completed in some sort of fashion - now what to do....."
#get some information on the session
veeamconfig session info --id $SESSID > /home/ubuntu/sessioninfo
veeamconfig session log --id $SESSID > /home/ubuntu/sessionlog
#extract some variables for email
BACKUP=$( cat /home/ubuntu/sessionlog | grep -o "Backed up.*" )
STARTDATE=$( cat /home/ubuntu/sessioninfo | grep "Start time:" | sed 's/\sStart time:\s/\ /g' | cut -f2 | awk '{print $1}' )
STARTTIME=$( cat /home/ubuntu/sessioninfo | grep "Start time:" | sed 's/\sStart time:\s/\ /g' | cut -f2 | awk '{print $2}' )
ENDDATE=$( cat /home/ubuntu/sessioninfo | grep "End time:" | sed 's/\sEnd time:\s/\ /g' | cut -f2 | awk '{print $1}' )
ENDTIME=$( cat /home/ubuntu/sessioninfo | grep "End time:" | sed 's/\sEnd time:\s/\ /g' | cut -f2 | awk '{print $2}' )
JOBNAME=$( cat /home/ubuntu/sessioninfo | grep "Job name:" | sed 's/\sJob name:\s/\ /g' | cut -f2 | awk '{print $1}' )
#build email headers
echo "To: $EMAILTO
Subject: VAL Job ($JOBNAME) - $STAT
MIME-Version: 1.0
Content-Type: text/html
" > /home/ubuntu/email.htm
#build a simple html body from the info we gathered
echo "<html><body>" >> /home/ubuntu/email.htm
echo "<p>Veeam Agent for Linux job ($JOBNAME) completed with a status of $STAT</p>" >> /home/ubuntu/email.htm
echo "<p>$BACKUP</p>" >> /home/ubuntu/email.htm
echo "<p>Started: $STARTDATE $STARTTIME<br>Finished: $ENDDATE $ENDTIME</p>" >> /home/ubuntu/email.htm
echo "</body></html>" >> /home/ubuntu/email.htm
#send email
cat /home/ubuntu/email.htm | sendmail -t $EMAILTO



Yeah, so as for how it all works let’s have a look!!!

The top of the script simply declares some variables – change these to your preferred email address and, most importantly, the name of your backup job!

The next block starts our backup job, redirecting its output text to a file, then parses out the session ID of the job we just started from that file.

The while loop simply polls the session state every few seconds and waits until our backup job has actually completed running.

Once the job finishes we dump some more information out about the session, then parse that information and hold it in some variables.

Finally, we build the email from those variables and hand it off to sendmail!

And that’s it!!!!!

As far as scheduling goes, channel your inner cronniness!  For instance, to have the job run every Monday through Friday at 1 AM we can add an entry to our crontab as follows

0 1 * * 1-5 /home/ubuntu/StartJob
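If you’d rather not open an editor, you can append that entry non-interactively as well – a quick sketch, assuming the script is saved at /home/ubuntu/StartJob:

```shell
# make sure the script is executable
chmod +x /home/ubuntu/StartJob
# append the schedule to the existing crontab (weekdays at 1 AM)
( crontab -l 2>/dev/null; echo "0 1 * * 1-5 /home/ubuntu/StartJob" ) | crontab -
# verify the entry landed
crontab -l | grep StartJob
```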

Anyways – I didn’t spend a lot of time cleaning up this script – just wanted to throw something together to get some basic notifications going through within Veeam Agent for Linux!  Feel free to report back with any awesome modifications or enhancements you would like to add! 🙂  Thanks for reading!



Linux just got a whole lot Veeamier!

Some of the biggest news coming out of VeeamON 2015 was based around yet another new free product from Veeam – this time, tackling the Linux market with what was then dubbed Veeam Backup for Linux.  You can read my thoughts about the product when it was announced here – or all of my coverage from VeeamON here.  That said, VeeamON is all in the past, and my predictions for Veeam Backup for Linux at the time were, well, let’s just say mostly incorrect 🙂

A new name – A new beta

Fast forward to present day and we now have a new name for the product – the newly renamed Veeam Agent for Linux has also been released into a public beta!  So if you fancy yourself some tab-completed, neck-bearded, command-line, bash-junky veeamy goodness you can go ahead and sign up for the beta yourself!  The bits will be handed out in a first-come, first-served manner so I suggest you stop reading this and go ahead and sign up (and then come back of course :))

So what are we looking at?

To kick things off we have support for both Debian and RedHat based Linux distributions.  In true Veeam fashion of trying to make things as easy as possible there is no “make” or “install” craziness – the product simply comes shipped as a .deb or .rpm package depending on your preferred distribution.  As far as kernel support goes – so long as you are running 2.6.32 or higher you are good – which is a pretty hefty backport of support for a new product in my opinion, covering kernel releases back into late 2009!  With both 32 and 64 bit kernels supported I’d say we are well covered for a 1.0 beta!
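Not sure what kernel you’re running?  A quick check against that 2.6.32 floor can be done in pure shell with sort -V, which sorts version strings numerically:

```shell
# minimum kernel version VAL supports
required="2.6.32"
# strip the distro suffix, e.g. "3.13.0-24-generic" -> "3.13.0"
current="$(uname -r | cut -d '-' -f1)"
# if the required version sorts first (or ties), the running kernel is new enough
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "Kernel $current is supported"
else
  echo "Kernel $current is too old for VAL"
fi
```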

Let’s give it a shot!

To kick things off I’ve set up an instance of Ubuntu Server, running 14.04.  Now there are a number of prerequisites that we will need installed before we can successfully install Veeam Agent for Linux (VAL) – I’m not going to list them all here, you can see them in the VAL forum that has been created for the beta – think of the normal culprits like gcc, make, etc.  Now, you can go ahead and install these one by one – or you can simply do it the lazy way: attempt to install the deb package, allow it to fail, then run an apt-get -f install (shown below)
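In practice the lazy method looks something like this – the wildcard stands in for the actual filename of whatever beta build you pulled down:

```shell
# try to install the veeamsnap kernel module package - this will
# complain and fail if prerequisites like gcc/make are missing
sudo dpkg -i veeamsnap_*.deb
# apt-get -f ("fix broken") pulls in the missing dependencies and
# finishes configuring the half-installed package
sudo apt-get -f install
```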


At this point you can go ahead and install the VeeamSnap package, then proceed to do the same exercise of “apt-get -f install” with the Veeam package itself to handle any prerequisites for it.

As far as where I’m going to place my backups for the purposes of this post – it’s simply sitting on a secondary drive that I have attached to my test VM – however you could, and most likely should, attach some sort of NFS mount to get your backups off host.  In the future, and I’m hoping not too long from now, we will see some sort of Veeam Backup and Replication repository integration – similar to what we have seen with Veeam Endpoint for Windows.
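If you do want to go the off-host route, an NFS mount is probably the path of least resistance on Ubuntu – a rough sketch below, where the server address, export path, and mount point are all assumptions for my lab:

```shell
# install the NFS client bits
sudo apt-get install nfs-common
# create a mount point and mount the export
sudo mkdir -p /mnt/veeamrepo
sudo mount -t nfs 192.168.1.20:/exports/veeam /mnt/veeamrepo
# to survive reboots, add the mount to /etc/fstab as well
echo "192.168.1.20:/exports/veeam /mnt/veeamrepo nfs defaults 0 0" | sudo tee -a /etc/fstab
```

Then just point the job’s destination at the mount point when configuring it.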

Veeam has done an amazing job with the interface it provides in Veeam Agent for Linux.  At times you might forget you are even on a command line, as the UI is pretty advanced for bash 🙂  To kick things off and create our first job let’s go ahead and run “sudo veeam” as shown below and hit ‘C’ for configure…

First things first – give the job a name.


As far as the backup mode goes I selected Volume Level – but if you are following along you’ll see you also have the option to do a complete machine-level or file-level backup as well.  Below I’ve selected my OS drive as my source.


Again, my destination is local and I’ve set it to maintain 14 restore points.


And, if you chose to do so you can set a schedule…


If you are a Veeam Backup and Replication user then you might be used to certain statistics when looking at running jobs – as you can see below, the bash UI that Veeam has created gives us the same look and feel as that of B&R – allowing us to see the Bottleneck and data statistics just as we do in B&R – this is a huge feature in my mind!


So with that we have successfully installed and configured the new Veeam Agent for Linux beta – also, we have created and successfully run a backup job.  So far so good!  That said, backups are only half the battle – restores are where it’s at!  So how do we restore files within the Veeam Agent for Linux?  First up, select your backup job and hit ‘R’.


Then, select the backup you wish to recover from (restore point) and hit enter – you should get a message stating that your backup has been mounted to /mnt/backup/ (as shown below)


From there you can simply exit the Veeam UI, navigate to /mnt/backup/ and restore by copying whatever files/directories you wish to wherever you wish…
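In other words, once the restore point is mounted it’s just standard file copying – something along these lines, with the paths obviously being placeholders for your own:

```shell
# see what the mounted restore point exposes
ls /mnt/backup/
# copy whatever you need back into place - the -a flag preserves
# permissions, ownership, and timestamps
# (the exact directory layout under /mnt/backup/ depends on your backup mode)
sudo cp -a /mnt/backup/path/to/the/file /desired/destination/
```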


Once you have completed restoring your files, simply go back into the Veeam Agent for Linux UI, select your Job Name and hit ‘U’ to unmount the backup from the machine.


So there you have it!  The first beta for the newly renamed Veeam Agent for Linux has arrived!  If you would like to help shape the product by providing feedback I’d recommend you go out and pull this beta down now and start playing around with it!    Thanks for reading!

Infinio leveraging VAIO with Accelerator 3.0

Just this past Monday Infinio announced the general availability of the third iteration of their performance enhancing software, Accelerator 3.0.  If you haven’t heard of Infinio I would certainly recommend checking them out – they are one of really only a few companies providing true server-side caching for virtual machines.  On a side note, one interesting fact about Infinio is that they actually came out of stealth during a Tech Field Day presentation – a pretty cool entry if you ask me!

Accelerator 3.0 marks the third major release from Infinio since they entered the market in 2013 – with each bringing something new and exciting to the table.  So what’s new this go around? – A few things caught my attention…

Built on VAIO


Along with vSphere 6.0 U1 came something called VMware VAIO, or more specifically vSphere APIs for IO Filtering.  This is essentially a framework provided by VMware allowing third party vendors to insert their technology directly into the I/O stream of a VM.  As you can imagine, disrupting the I/O stream of a VM could potentially have performance implications.  By utilizing VAIO, third parties are now able to do this in a safe and certifiable manner – using API calls and technology that is supported by VMware.  Accelerator 3.0, from what I can see, is one of the first products to be certified on and utilize VAIO.  It should be noted however that even though VAIO was available with vSphere 6.0 U1, Accelerator 3.0 requires vSphere 6.0 U2!

VM-Level Acceleration


The Infinio I/O filter gets installed onto each host within your cluster, which in turn provides acceleration on a per-VM basis rather than accelerating complete datastores.  This functionality is also completely integrated with Storage Policy-Based Management – for instance, with VVOLs we are able to dictate via policy which type of disk our VMs sit on, along with many other array-provided technologies they utilize – in addition to this, Infinio Accelerator allows us to attach policies that determine the cache properties and acceleration of these VMs.

Memory and SSD

Infinio has always been known for its use of server RAM as an acceleration medium.  In addition to this, Accelerator 3.0 gives customers the choice to offload cache to local SSDs and PCIe/NVMe flash as well, all the while supporting the acceleration of any type of storage, be it SAN, NAS, DAS, VSAN, VMFS, VVOLs, etc.  In the end we have somewhat of a tiering effect for cache – RAM being the primary tier, with data flowing down to flash as it ages – allowing us to have fast performance for primary cache while having an option for a larger cache size.

While these were the features that certainly grabbed my attention, it’s the performance that Infinio is highlighting.  Through its early adopters of Accelerator 3.0, Infinio has seen 1 million IOPS per host, coupled with an astounding 20GB/sec of throughput!  If you are interested in learning more about the product I suggest you jump onto one of the 30 minute demos they are offering – with the next one occurring on June 23rd @ 1PM EST.  Also, if you need a little literature beforehand you can pull down the product whitepaper!  More of a hands-on type of person?  Get your hands on a 30 day free trial here!  Infinio has always stressed that they are a risk-free product – in the past I’ve installed and removed Infinio in live production environments without experiencing any downtime whatsoever – and I’m sure things haven’t changed with this release!


Want to try VVOLs? How about StarWind?

One of the biggest features included within vSphere 6.0 has to be that of Virtual Volumes (VVOLs).  VMware had been talking up VVOLs as far back as VMworld 2011, pushing out four new releases of vSphere since then – but it wasn’t until February of 2015, when VMware announced vSphere 6, that they could finally say VVOL support had arrived.

The hype around VVOLs is validated – to put it bluntly, they simplify the life of the vSphere administrator.  Instead of engaging the storage team to deploy LUNs and datastores, the vSphere administrator can now simply apply a storage policy to a VM, specifying capabilities and requirements around snapshotting, deduplication, RAID level, etc. – the policy then talks to the storage array and places the VM on the disks that can fulfill that policy.  On the flip side, the storage administrators no longer have to tune different LUNs and disk groups to meet requirements set forth by the virtualization team – they simply allocate storage which can be consumed by VVOLs.

VVOLs really has only two main requirements:

  • You need to be running vSphere 6.0 or higher
  • Your storage array needs to have built in the support for VVOLs

Even though there are only two requirements one could say that they are pretty big ones – Firstly you may not have upgraded to vSphere 6 yet in production, thus eliminating the ability to deploy VVOLs.  For those with home labs – although they may already be running vSphere 6, they may not have access to an array which supports VVOLs – so that is out as well.

This is where StarWind Virtual SAN comes into play.  Although it wasn’t until 2011 that they rebranded to the Virtual SAN name, StarWind had been providing their software-based iSCSI shared-storage solution for quite some time.  The software runs on Windows, and supports a lot of the features you might find in comparable physical arrays.  Features such as caching, high availability, fault tolerance, scaling characteristics, deduplication, compression, replication and snapshots are all built into StarWind’s Virtual SAN offering.  Another feature (currently in Technical Preview), as you may have already guessed, is VVOL support!  This means we don’t need expensive arrays in order to try out VVOLs – we can simply use a Windows server with some attached local storage and StarWind’s Virtual SAN.

How to get started with StarWind VVOLs

The first thing you will need to do is download the technical preview of the StarWind VSA, or Virtual Storage Appliance.  The VSA is the easiest and quickest way to get StarWind Virtual SAN up and running as it contains a complete installation (180 day trial) of Windows Server, pre-loaded and configured with StarWind Virtual SAN.  All you need to do is deploy it and attach it to the proper networks.  You can download the latest version of the VSA (with VVOL support) here.  Due to Microsoft licensing you will find that the VSA comes as a preconfigured VM for Microsoft Hyper-V; however, if you want to use it with vSphere (as I did) it also handily comes packaged with the StarWind V2V converter, which will allow you to convert the vhdx files into vmdks for deployment to ESXi.

Since StarWind utilizes iSCSI as its transport method you must set up iSCSI initiators on your ESXi hosts.  If you have already configured hardware or software initiators you can go ahead and use them – just ensure the network on the VSA is on the same subnet.  If you haven’t, I’ll explain the process below…

The first thing we will need to do in order to connect our hosts to the VSA through iSCSI is to setup some basic networking.  Follow the ‘Add Host Networking’ wizard using the following options…

  • Connection Type – VMkernel Network Adapter
  • Target – New Standard Switch
  • Physical NICs – any free physical NIC that is connected to the storage network
  • None of the services will need to be enabled
  • Give your VMkernel a proper IP and subnet in order to reach the StarWind VSA storage network
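For what it’s worth, the same wizard steps can also be done from the ESXi shell with esxcli – a rough sketch, with the vSwitch name, vmnic, and addressing all assumed for my lab:

```shell
# new standard switch with a free physical uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
# port group for the storage VMkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1
# VMkernel adapter with an IP on the StarWind storage subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.11 --netmask=255.255.255.0 --type=static
```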

Once the networking is set up we need to enable the software iSCSI initiator on your host.  From the Manage -> Storage tab within the vSphere Web Client, click the ‘+’ icon in the Storage Adapters section to add the iSCSI initiator.  From that same screen select your newly added iSCSI adapter and then select the Network Port Binding tab.  Here is where we will bind our initiator to the vSwitch we created earlier.  Click the ‘+’ icon and bind the adapter to the proper VMkernel network as shown below…
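The CLI equivalent of enabling the software initiator and binding it is below – the adapter name vmhba33 and the vmk1 interface are examples, so check what your host actually reports:

```shell
# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# confirm the adapter name it was given (often vmhba3x)
esxcli iscsi adapter list
# bind the initiator to the VMkernel port we created for storage
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
```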


Once completed we can go ahead and add the StarWind VSA as a target within our initiator – again, this is done in the storage adapter section, but on the ‘Targets’ tab as shown below…
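The same target can be added from the command line as well – the VSA address and adapter name here are examples from my lab:

```shell
# add the StarWind VSA as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
# rescan so the host picks up the newly presented devices
esxcli storage core adapter rescan --adapter=vmhba33
```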


Give the host adapters a quick storage rescan to allow them to connect to the StarWind VSA.  Once the StarWind VSA and the ESXi hosts have been configured it’s finally time to dive into the setup for VVOLs.  VVOLs rely on VASA 2.0 to allow for communication between vCenter and the storage arrays.  Thankfully, configuring this is as simple as passing a URL and some credentials to vCenter.

But before we get too deep there are some commands we may need to run depending on how your environment is set up – you may need to rebind the MAC on the StarWind VSA to its corresponding certificate.  For instance, if you are planning on connecting to the StarWind VSA via IP address (as I did below) then you should run the following command on your StarWind VSA to reset the VASA certificate.

wmic -namespace:\\root\starwind path STARWIND_ClusterService call ResetVASA BindToInterfaceMAC=<MAC ADDRESS>

Keep in mind the MAC address that you want to place in the above command would be the MAC address of your VSA’s management interface.

With that out of the way it’s time to register our StarWind VSA as a storage provider within vCenter.  From the Manage tab with the target vCenter server in context, select the Storage Providers section and then “Register a new Storage Provider”.  From here we need to provide the StarWind VASA URL (https://<IP_OF_StarWind>:9991/vasa/) and the default username and password (root/starwind) as shown below…
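Before registering, it doesn’t hurt to confirm the VASA endpoint is actually answering – a quick check from any machine that can reach the VSA (the IP is an example, and -k is needed because the preview ships a self-signed certificate):

```shell
# print just the HTTP status code - any response beats a connection error
curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.1.50:9991/vasa/
```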


With our Storage Provider registered and our target server identified we now have a couple of the building blocks in place to use VVOLs – the last thing we need to do is configure a VVOL datastore – essentially a container within vCenter that we place our VM disks on, which will in turn instruct the array to create its own VVOLs.  Adding a VVOL datastore is similar to the process of adding a regular VMFS datastore, only selecting VVOL as the storage type as shown below.

From there the only other requirement is selecting the StarWind array as the backing storage container.  If all of the steps were performed properly you should see it appear as a valid backing storage container as shown below…


Once this is done we can start playing around with VVOLs and the StarWind implementation of them.  For instance, by creating a new VM and selecting our VVOL datastore as the storage, we can see that instead of the traditional files being placed within a single image file inside of StarWind, new image files are created – one for the configuration of the VM, and another for the disk of the VM (see below)


The creation of the ImageFiles is done by StarWind and is processed automatically, provided you pick your VVOL datastore as your storage target.  Shown below we can see how things change once we power on our VM – again, another image file is created, this one holding the storage for the swap file.  When the VM is powered off this ImageFile will be automatically deleted.


At this point we can confirm that VVOLs are indeed working the way we expected them to – instead of a VM residing inside files on top of a LUN, the components of the VMs are essentially each their own LUN.  This is some great technology that will definitely change the way we deploy storage; however we aren’t done yet – we still need to look at a little concept called VM Storage Policies.

VM Storage Policies is a concept that basically allows us to define policies – these policies in turn dictate how we utilize the different features and capabilities the array provides.  These policies are then attached to VM disks, automatically placing the disks of the VM on the proper chunk of storage depending on the policy requirements.  Take the below image for example: we can define a new policy specifying whether we would like the disk to be deduplicated, thin provisioned, or replicated, as well as the array caching requirements.  These capabilities are all provided and performed by the StarWind VSA.


To create a VM Storage Policy it’s a matter of clicking ‘Policies and Profiles’ –> VM Storage Policies and selecting to ‘Create a new VM storage policy’.  We then define our rule-sets based on the characteristics of storage we want.  Creating the policy is only half the work however – we still need to assign the policy to our VM.


As you can see above this is done within the ‘Add New VM’ wizard.  When on the ‘Select Storage’ step instead of simply selecting our VVOL datastore as we have done previously we assign a VM Storage Policy to the VM.  In the example above we selected ‘Gold’ and we can see that our VVOL datastore that we created earlier is indeed compatible with that policy – meaning it meets the requirements that we defined within the policy’s rulesets.  Also, as shown below we can assign different storage policies on a per disk basis within the VM.  This allows us to support use cases such as a backup drive.  Maybe we have one storage policy dictating faster disks to host our OS VVOLs, and another that defines slower disks such as Nearline for our backup drives.  Either way we can mix and match policies per VM.


So as you can see, you don’t need to purchase expensive hardware if you are just trying to get a look and feel for VMware VVOLs.  The technical preview of the StarWind VSA will give you most, if not all, of the functionality you need to get a feel for how things are going to work, using existing local storage you already have in place.  StarWind certainly isn’t done with their VVOL implementation either – they are constantly providing updates to the Technical Preview for you to check out – watch for things such as snapshot support coming in the near future.  For now, get ahead of the game and get the technical preview up and running for yourself so you can see what all the VVOL hype is about!