Tag Archives: Powershell

Consuming the Veeam REST API in PowerShell – Part 1 – Starting a job

Since the version 7 release of Veeam Backup & Replication, all of the typical Enterprise Manager functionality has been exposed via an XML REST API.  Being a pretty heavy user of vRealize Orchestrator, this has proven extremely useful to me when looking to automate certain parts of my infrastructure.  That said, there are times when vRO is simply out of reach, or when the person I'm creating the automation for is more familiar with PowerShell.  Now, I understand that Veeam Backup & Replication comes with PowerShell support, and what I'm about to walk through may be somewhat redundant since Veeam provides its own cmdlets for many of these tasks, but this crazy IT world we live in is changing and REST seems to be at the helm of that.  We are seeing more and more vendors first create a REST API and then consume it themselves in order to provide customers with a GUI front end.

So, in the spirit of learning how to work with the Veeam REST API, I decided I'd take the time to document how to perform some of the sample functions from their API reference using nothing but PowerShell.   This first post will deal solely with how to start an existing Veeam Backup & Replication job.   Keep in mind that the sheer nature of REST is that, although the bodies and headers may change, the process of consuming it is relatively the same no matter what the application, so there is some valid learning to be had regardless of the end product.

PowerShell and interacting with REST

Before jumping right into Veeam specifics we should first discuss a few things about the PowerShell cmdlet we will need to use, as well as some specifics of the Veeam Enterprise Manager REST API itself.  REST APIs are nothing more than simple HTTP requests sent to an endpoint, meaning they are consumed by sending a request (a GET, PUT, POST, etc., whatever the API supports) to a URI.  From there, the API looks at what was passed and returns what any HTTP request normally would: a header, a status code, and a body.  It's this response that we need to parse in order to discover any details or information pertaining to our request; it lets us know whether or not the operation was successful, and passes back any valid data as it relates to the request.  Now, in Veeam's case they use an XML-based API for Enterprise Manager.  This means we can expect to see the response body in XML format, and if we ever need to create a body to pass with a request, we first need to form that body as XML before we send it!  All of this sounds kind of difficult, but in the end it really isn't, and you will see that as we create our first script!  Really, there are two key PowerShell specifics we are using…

  • Invoke-WebRequest – the cmdlet we use to send the API call, passing a uri, method, and sometimes a header
  • [xml] – a simple way to take our response and cast it as XML in order to more easily parse and retrieve the desired information from it
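To see how the [xml] cast works before involving Veeam at all, here is a tiny self-contained example; the XML below is a made-up fragment shaped like the LogonSession responses we will parse later, so the Href values are illustrative, not from a live server.

```powershell
# A made-up XML fragment shaped like the responses we'll be parsing;
# the Href values here are illustrative, not from a live server.
$sample = @"
<LogonSession>
  <Links>
    <Link Type="JobReferenceList" Href="http://localhost:9399/api/jobs"/>
    <Link Type="BackupServerReferenceList" Href="http://localhost:9399/api/backupServers"/>
  </Links>
</LogonSession>
"@

# Cast the string to XML, then walk the nodes like a property tree
$xml = [xml]$sample
$href = ($xml.LogonSession.Links.Link |
    Where-Object { $_.Type -eq 'JobReferenceList' }).Href
$href
```

Notice how the cast lets us dot our way through element names and filter attribute values with Where-Object; this is exactly the pattern we will use against the real responses below.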

So with that said, let’s get scripting…

First Step – Get a SessionId

The first step in any API consumption is usually authentication, and aside from the scopes and methods themselves this is normally where we see the most variation between vendors.  With Veeam we simply send a POST request to the sessionMngr resource type and retrieve a sessionId.  It's this sessionId that will need to be included in the header of all subsequent requests to the API; this is how we are identified and authenticated.   Now, you could send a GET request to the root of the API scope and parse through all of the returned content to find a specific version's uri if you wanted, but I happen to know that we can simply use ?v=latest with Veeam to always use the latest and greatest version.  So let's go ahead and authenticate against the API and retrieve our sessionId with the following code:

$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
$sessionId = $response.Headers["X-RestSvcSessionId"]

Looking at the code above, we are doing a couple of things: first, we issue a POST request to http://localhost:9399/api/sessionMngr/?v=latest, having the system prompt us for the credentials that will perform the actual authentication.  Then we parse the headers of the response in order to grab our sessionId.  If all goes well, you should be left with a session ID string stored in our sessionId variable, and now we are authenticated and ready to start requesting…
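If you want a little safety around that logon call, a minimal sketch like the following (same localhost server as above) makes a bad credential or an unreachable server fail loudly instead of silently leaving $sessionId empty. This needs a live Enterprise Manager to run, so treat it as a sketch rather than tested code.

```powershell
# Hedged sketch: the same logon call as above, wrapped so failures are obvious.
try {
    $response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" `
        -Method "POST" -Credential (Get-Credential)
    $sessionId = $response.Headers["X-RestSvcSessionId"]
    if (-not $sessionId) { throw "No X-RestSvcSessionId header in the response" }
}
catch {
    Write-Error "Logon to Enterprise Manager failed: $_"
}
```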

Now let’s start that job!

So the first example in the REST API reference is starting a specific job.  To do this we first need to get the uri for the jobs resource.  Now, we could simply look this up in the reference guide as it has all the information (***hint*** it's http://localhost:9399/api/jobs), but where's the fun in that?  The response we just received from logging in has all of the information we need to grab the uri programmatically, and should things ever change we won't have to rewrite our code if we grab it from the response.  So, to get the proper uri we can use the following one-liner to parse our content as XML and find the correct child node…

$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href

Now that we have the proper uri we can make a GET request to it to return a list of jobs within Enterprise Manager.  But remember, we have to pass that sessionId in the request header as well, so we issue the following command…

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, our $response.Content will contain a lot of information, including all of our job names and their associated metadata.  So, in order to find the proper uri for my job (Backup Scoreboard) I can use the following command to once again retrieve the uri for our next call.

$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq 'Backup Scoreboard'}).Href

Once we have that, we again send a GET request to the new uri…

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, we get a lot of information back in $response.Content; the interesting part is the set of links describing the actions available on our job…

As you can see we have a few different Hrefs available to grab this time, each relating to a different action that can be taken on our job.  In our case we simply want to start the job, so let's go ahead and grab that uri with the following command…

$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href

And finally, to kick the job off we send a POST request this time, using the uri we just grabbed…

$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}

Now, if everything has gone as intended we should be able to pop over to our VBR console and see our job running.  Wasn't that way easier than right-clicking and selecting Start?  One thing I should note is that we can parse this response body as well and grab the taskId for the job we just started; from there we can query the tasks resource to check its status, result, etc.  For those that learn better by simply seeing the complete script, I've included it below (and in fairness, running this script is faster than right-clicking and selecting 'Start').  In our next go at PowerShell and the Veeam API we will take a look at how we can instantiate a restore, so keep watching for that…  Thanks for reading!
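As hinted above, the body returned by the job-start POST describes a task we can poll. A rough sketch follows; the Task/TaskId/State/Result node names are assumptions based on the same Enterprise Manager XML pattern we have been parsing, so verify them against your version before relying on this.

```powershell
# Hedged sketch: poll the task returned by the job-start POST until it finishes.
# Node names (Task, TaskId, State, Result) are assumptions; check your version.
$task = ([xml]$response.Content).Task
$taskUri = "http://localhost:9399/api/tasks/$($task.TaskId)"
do {
    Start-Sleep -Seconds 5
    $taskResponse = Invoke-WebRequest -Uri $taskUri -Method "GET" `
        -Headers @{"X-RestSvcSessionId" = $sessionId}
    $state = ([xml]$taskResponse.Content).Task.State
} while ($state -eq "Running")
# Once the state leaves "Running", the Result node tells us how the job start went
([xml]$taskResponse.Content).Task.Result
```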

$backupjobname = "Backup Scoreboard"
#Log in to server
$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
#Get Session Id
$sessionId = $response.Headers["X-RestSvcSessionId"]
# Get Job Reference link
$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href
# List jobs
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
# get specific job from list
$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq $backupjobname }).Href
#get job actions
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
#get start action
$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href
#Start job
$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}

Setting yourself up for success with Veeam Pre-Job Scripts

For a while Veeam has been able to execute scripts post-job, or after the job completes, but it wasn't until version 8 of their flagship Backup and Replication product that they added the ability to run a pre-job script, one that executes before the job starts.  When v8 first came out with this ability I struggled to figure out what in the world I would need a pre-job script for, and for the longest time I never used it in any of my environments.  If a job failed I would execute post-job scripts to run and hopefully correct the reason for failure.  But a while back it dawned on me, and with a bit of a change in mindset I realized something: why fail first?

Why fail when success is possible?

As I mentioned above, I'd grown accustomed to using post-job scripts to correct any failing jobs.  For instance, there were times when, for whatever reason, a proxy would hold on to a disk of one of my replicas; subsequently, the next run of the job would fail trying to access that disk, and even more importantly, consolidation of any VMs requiring it would fail as the original replica couldn't access the disk mounted to the proxy.  What did I do to fix this?  Well, I added a script that executed post-job and simply unmounted any disks from my Veeam proxies that shouldn't be mounted.

Another scenario: I had some issues a while back with some NFS datastores simply becoming inaccessible.  The fix?  Simply remove and re-add them to the ESXi host.  The solution at the time was to run a post-job script in Veeam.  If the job failed with the error of not being able to find the datastore, then I ran a script that would automatically remove and re-add the datastore for me, and on the next job run everything would be great!

“Fail and Fix” or “Fix and Pass”

So, the two solutions above, while they do fix the issues, do so after the fact, after we have already failed.  Even though they fixed everything up for the next run of the job, I'd still lose that one restore point, and sure enough, the time WILL come when it's that exact point in time you need to recover from!  The answer to all this is pretty simple: migrate your post-job scripts to pre-job scripts.  Let's set ourselves up for success before we even start our job!  Although this may seem like common sense, for whatever reason it took a while before I saw it that way.

So with all that, hey, let's add some code to this post.  Below you will find one of my scripts that runs before each Veeam job: my proactive approach to removing foreign replica disks from my Veeam proxies!

Add-PSSnapin VeeamPSSnapIn
Add-PSSnapin VMware.VimAutomation.Core
 
Connect-VIServer vcenter.mwpreston.local -u username -pass password 
 
# get job name out of parent process id
$parentpid = (Get-WmiObject Win32_Process -Filter "processid='$pid'").parentprocessid.ToString()
$parentcmd = (Get-WmiObject Win32_Process -Filter "processid='$parentpid'").CommandLine
$jobid = $parentcmd.split('" "')[16]
$vbrjob = get-vbrjob | where-object { $_.Id -eq "$jobid" }
 
#get some info to build replica VM names
$suffix = $vbrjob.options.ViReplicaTargetOptions.ReplicaNameSuffix
$vms = $vbrjob.getObjectsInJob()
 
#create array of replica names
$replicasinjob = @()
foreach ($vm in $vms)
{
 $replica = $vm.name+$suffix
 $replicasinjob += $replica
}
 
#loop through each replica and check Veeam proxies for foreign disks
foreach ($replicaitem in $replicasinjob)
{
 $replica = $replicaitem.tostring()
 Get-VM -Location ESXCluster -Name VBR* | Get-HardDisk | where-object { $_.FileName -like "*$replica*"} | Remove-HardDisk -Confirm:$false
}
 
exit

So as you can see this is a simple script.  It first retrieves the job it was called from by walking up to its parent Veeam process and pulling the job id out of that process's command line; by doing it this way we can reuse this block of code in any of our jobs.  It then searches through all of the disks attached to our Veeam proxies, and if it finds one that belongs to one of the replicas we are about to process, it removes it.  Simple as that!  Now, rather than failing our job because a certain file has been locked, we have set ourselves up for a successful job run, without having to do a thing!  Which is the way I normally like it 🙂  Thanks for reading!

Running free VeeamZip directly from the vSphere Web Client

There are a lot of times I find myself needing to take a one-off backup of a VM.  Prior to software upgrades or patching I always like to take a backup of the affected VM(s) in the event that, well, you know, I mangle things.  VeeamZip is great for this: it allows me to process a quick backup of my VM that is separate from its normal backup and replication routines.  Since I work in an environment that runs paid Veeam licenses I have access to the Veeam Plug-in for the vSphere Web Client, and this plug-in does exactly what the title of this blog post says: it allows us to perform VeeamZips of our VMs without having to leave the vSphere Web Client or log into our Veeam Backup and Replication console.

What if I’m using Veeam Backup and Replication FREE?

So this is all great for me, but I got to thinking: what if I wasn't running a paid version of Veeam Backup?  What if I was simply running the free version?  It doesn't come with Enterprise Manager, therefore it doesn't come with a means of installing the Veeam Backup and Replication Web Client plug-in, therefore no VeeamZip from the Web Client, right?  Wrong!  Ever since Veeam Backup and Replication v8 Update 2 came out, Veeam has included PowerShell cmdlets around the VeeamZip functionality.  I wrote about how to use them last year in Scheduling Veeam Free Edition Backups.  Well, since we have PowerShell, that means we can use vRealize Orchestrator to build a workflow around it, and we have the ability to execute workflows directly from within the vSphere Web Client.  So without further ado: running the free VeeamZip functionality directly from the vSphere Web Client.

First up the script

I didn't get too elaborate with the script, as you can see below.  It's simply a handful of lines that take in a few parameters: the VM to back up, the destination to store the backup in, and the retention, or auto-deletion, of the backup.

Param(
[Parameter(Mandatory=$true)][string]$VM,
[Parameter(Mandatory=$true)][string]$Destination,
[Parameter(Mandatory=$true)][ValidateSet("Never","Tonight","TomorrowNight","In3days","In1Week","In2Weeks","In1Month")][string]$Autodelete
)
#Load Veeam Toolkit
& "C:\Program Files\Veeam\Backup and Replication\Backup\Initialize-VeeamToolkit.ps1"
#Get the VM Veeam Entity.
$vmentity = Find-VBRViEntity -Name $VM
 
#VeeamZip it!
Start-VBRZip -Entity $vmentity -Folder $destination -AutoDelete $Autodelete -DisableQuiesce

That's it for the script.  Simple, right?  Feel free to take this and add whatever you see fit to suit your needs 🙂
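If you want to test the script on its own before wiring it into vRO, an invocation would look something like this; the VM name, destination path, and script filename here are made-up examples.

```powershell
# Hypothetical invocation; VM name, path, and script filename are examples only
.\VeeamZip.ps1 -VM "Web01" -Destination "E:\VeeamZips" -Autodelete "In1Week"
```

Because the Autodelete parameter uses ValidateSet, anything outside the listed retention values is rejected before the script even runs.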

The Orchestrator Configuration

Before we get to creating our workflow there are a few things we need to do within Orchestrator, mainly adding the server that hosts our Veeam Free instance as a PowerShell host within vRO.  But even before we run the 'Add a PowerShell host' workflow we need to run a few winrm commands on the Veeam Free instance.  I have a complete post about setting up a PowerShell host here, but will include the commands you need to run below for quick reference.

First up, on the Veeam server run the following in a command shell…

  • winrm quickconfig
  • winrm set winrm/config/service/auth @{Kerberos="true"}
  • winrm set winrm/config/service @{AllowUnencrypted="true"}
  • winrm set winrm/config/winrs @{MaxMemoryPerShellMB="2048"}

Next, from within vRO (as shown below) we can run the 'Add a PowerShell host' workflow…

PowerShellHost1

As you can see, my Veeam server is actually the same as my vCenter server.  I don't recommend doing this, but hey, it's a small lab!  Just be sure to use the FQDN of your Veeam server for the Host/IP column.

PowerShellHost2

Ensure that the remote host type is WinRM, and that the Authentication method is set to Kerberos.

PowerShellHost3

And be sure that we are passing our username in the ‘username@domain’ format, along with ‘Shared Session’ for the session mode.  Once you are done go ahead and click ‘Submit’.  If everything goes as planned your Veeam Backup and Replication server should be added as a PowerShell host within vRealize Orchestrator.

And now, the workflow!

Finally we can get to actually building our workflow.  If you remember, our script takes in three parameters: VM, Destination, and AutoDelete.  We will mimic the same in our workflow, only calling them input parameters within vRO (shown below).

workflow1

Now, since we will be using the built-in PowerShell workflow 'Invoke an external script', we will also need to set up some workflow attributes to pass to that workflow.  Below you can see how I've set up mine…

workflow2

Your configuration may vary a little from this one, but as you can see we simply add a PowerShell host attribute and map it to our newly added host, as well as assign the ScriptPath attribute to the path where we saved our little VeeamZip script earlier.  The arguments attribute can remain empty, as we will only use it to build the arguments string to pass to the script.

workflow3

The first element we want to add to our workflow schema is a Scriptable task – go ahead and drag that over into your workflow.  This is where we will create our arguments string.

workflow4

As far as what goes into the scripting, you can see I've simply brought in the arguments attribute along with our three input parameters and chained them together into one string (arguments = '"' + VM.name + '" "' + Destination + '" "' + AutoDelete + '"';), then ensured that my arguments attribute was included in the output as well.

workflow5

Next, drag the 'Invoke an external script' workflow into your schema (you can see I've renamed mine 'Run VeeamZip').  Ignore all of the prompts regarding the setup of parameters that pop up; the easiest way I like to do this is by editing the workflow (the pencil above it) and using the 'Visual Binding' tab as shown below.

workflow6

Simply drag and drop your in attributes to their corresponding in attributes on the external script workflow, along with mapping your output to output.  Easy Peasy!

At this point you can go ahead and save and close your workflow – we are done with Orchestrator.  If you want to run the workflow a few times to test from within vRO go ahead – but the point of this post was to run it from within the Web Client so let’s move on to that step.

vRO and vSphere Web Client

I love vRealize Orchestrator, and I love the fact that I can contextually execute custom workflows from within the vSphere Web Client.  To do this you need to first register your vRO instance with vCenter; this may already be done for you depending on how you set everything up, and I'm not going to get into that configuration today.  To get to our context mappings we need to click on Home->vRealize Orchestrator.  With the vRO home in context, select the 'Manage' tab and then 'Context Actions'.  We then want to hit the little green + sign to add a new workflow context mapping.

webclientworkflow1

The next steps are pretty self-explanatory: navigate through your vRO inventory to your workflow, click 'Add', and select 'Virtual Machine' from the types box.  This is what allows us to right-click on a VM and run our VeeamZip, passing the contextually selected VM to the workflow's VM input parameter.  Click 'OK' and it's time to VeeamZip!

Now when you want to run the workflow you can simply right click a VM and navigate to (in my case) All vRealize Orchestrator Actions->VeeamZipVM

execute1

As you can see our workflow will start, using our VM selected as the VM input, and simply prompt us for the destination and AutoDelete settings.

execute2

And there you have it!  We can now use the Free version of Veeam Backup and Replication to VeeamZip our VMs directly from within the vSphere Web Client.  In fact, our workflow will even show up within our vSphere tasks so we can monitor the status of the job.  Now, there is no error checking or anything like that…yet!  Let me know if you have any questions, concerns, etc… always happy to read the comments!

Adding Veeam Proxies to jobs via Powershell

There will come a time in every growing environment when you need to scale your Veeam Backup and Replication deployment to help keep up with the ever-increasing demands of backing up all those new virtual machines.  Veeam has a couple of different deployment models when it comes to scaling.  We can scale up: adding more CPU and RAM to our current proxies and increasing the maximum number of concurrent tasks our proxies can process.  A good rule of thumb here is to dedicate one CPU per task, so 2 concurrent tasks = 2 CPUs.  The other option is to scale out, building and adding additional Veeam proxies into the fold.  Which one you choose is completely up to you; however, I've had a better experience scaling out and adding more Veeam proxies into my infrastructure.  Why?  Not really sure, I just don't like having more than 2 or 3 processes hitting any one proxy at the same time.  Just a preference, really…

If we have accepted the defaults when creating our backup/replication jobs, they should be set to 'Automatic selection' for the backup proxy, meaning our work is done: as soon as we add the proxy into our backup infrastructure it will be available to all the jobs.  That said, if you have changed settings (like me) to specify certain groups of proxies for certain jobs, then we will have to edit each and every job in order to have it utilize the new proxy.   This isn't a hard process, but it can take some time depending on how many jobs you have.  I don't have an extreme number of jobs, maybe 10 or so, but I also don't like doing the same thing over and over, as it often leads to mistakes.

Enter PowerShell

So with all that said, here's a quick little PowerShell script that you can utilize to add a new proxy to a list of existing jobs.  As you can see I've chosen to add it to all of my jobs, but this can easily be modified to target only the jobs you want by filtering the output of the Get-VBRJob cmdlet (with Where-Object, for example).  The script is pretty simple, taking only one parameter, the proxy to add (***NOTE*** you will need to go into Veeam B&R and configure this machine as a proxy within your backup infrastructure; that part isn't automated), looping through all of my jobs, retrieving the list of existing proxies, adding the new one to that list, then applying the new list back to the job.  It does this for both the source proxies and the target proxies (as you can see with -Target).

Param ( [string]$proxyToAdd )

Add-PSSnapin VeeamPSSnapIn

$newProxy = Get-VBRVIProxy -Name $proxyToAdd
$jobs = Get-VBRJob
 
foreach ($job in $jobs)
{
$existingProxies = Get-VBRJobProxy -Job $job
$newProxyList = $existingProxies + $newProxy
Set-VBRJobProxy -Job $job -Proxy $newProxyList
 
$existingProxies = Get-VBRJobProxy -Job $job -Target
$newProxyList = $existingProxies + $newProxy
Set-VBRJobProxy -Job $job -Proxy $newProxyList -Target 
}

Simply save your script (I called it AddProxyToJobs.ps1) and run it as follows…

c:\scripts\AddProxyToJobs.ps1 newproxyname

There is absolutely no error checking within the script, so by all means make sure you get the syntax right or you could end up with a slew of errors.  Either way, this is a nice way to add a new proxy to a list of jobs without having to manually edit every job.  And as I mention with every script I write, if you have any better ways to accomplish this, or see any spots where I may have screwed up, please let me know…

Using PowerShell to mass configure the new Veeam v9 features

Veeam v9 is here, and if you have already performed the upgrade you might be a bit anxious to start using some of the new features that came along with it.  In my case I've already done my due diligence by enabling and configuring some of the new features on a few test backup/replication jobs, and I'm ready to duplicate this to the rest of the environment.  The problem: I have A LOT of jobs to apply these to.  As always, I look to automation to solve this issue for me.  One, it is way faster, and two, it provides a consistent set of configuration (or errors) across my jobs, making it far easier to troubleshoot and change if need be.  Thankfully, Veeam provides a set of PowerShell cmdlets that allows me to automate the configuration of some of these features.  So, if you are ready to go, let's have a look at a few of the new features within Veeam v9 and their corresponding PowerShell cmdlets.

Just a note: for each of these examples I've posted the code to handle a single object, since that's easier and much easier to read, but you could easily surround the blocks of code with a foreach() if you are looking to apply the configurations to many objects.
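For reference, that foreach() wrapper would look something like this, shown here around the deleted-file-blocks setting; swap the three lines in the body for whichever option you are rolling out.

```powershell
# Minimal sketch of the foreach() wrapper: apply a per-job option to every job.
# Swap the body for whichever setting you are configuring.
foreach ($job in Get-VBRJob) {
    $joboptions = $job.GetOptions()
    $joboptions.ViSourceOptions.DirtyBlocksNullingEnabled = $True
    $job.SetOptions($joboptions)
}
```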

Enabling Per-VM file chains

First up is the per-VM backup file chain introduced in v9.  In previous versions of Veeam, all of the VMs contained within a single job were also contained within a single set of backup files; in the end we were left with some massive backup files sitting on our repositories.  Having a massive file lying around isn't such a big deal, but when the time came to manage or move that file in any way, it presented a few problems: it took a long time to move, and all activity involving that file had to be disabled until we were done.   In the end we were left with a lot of waiting and no backups.  The v9 per-VM backup file chain fixes this: it allows us to store our backup files on a per-VM basis, leaving them much easier to manage, not to mention the headaches that are saved if corruption of our backup files occurs.  Either way, I wanted to enable this on a dozen or so of my repositories…

I say repositories since that is where the per-VM backup chain is enabled: not on the job, not on the VM, but on the actual Veeam repository.  The process is pretty simple: get our repository, set a flag to true, and call the saveOptions() method, as follows…

$repo = Get-VBRBackupRepository -Name "Name of repository"
 
$repo.Options.OneBackupFilePerVm = $true
 
$repo.saveOptions()

New Mount Server

In versions of Veeam before v9, certain restore operations required mounting backups to a Veeam backup server, which, when dealing with remote sites, could have resulted in increased bandwidth usage depending on how you had configured your environment.  v9 gives us the ability to designate any Windows machine as a mount server.  The mount server can then be used as a mount point to perform file-level recovery operations, allowing the traffic to stay local to the remote site.

As with the per-VM backup chains, mount servers are enabled at the repository level.  In my case I wanted my repositories and mount servers to be one and the same; to do that I simply get the remote repository, then call Set-VBRBackupRepository, passing it my mount host name and turning on the vPower NFS flag as shown below…

$repo = Get-VBRBackupRepository -Name "Name of repository"
 
$repo | Set-VBRBackupRepository -MountHost (Get-VBRServer "Name of desired Mount Host") -EnableVPowerNFS

Guest Interaction Proxy

Another new ROBO-enhancing feature in v9 is the ability to specify a guest interaction proxy.  Previously, the Veeam Backup and Replication server handled deploying runtime processes into the VMs to facilitate different parts of the backup and replication jobs.  In v9 we can now designate servers that may be on site to do this.  This helps in a couple of ways: first, it reduces traffic traversing our WAN, and second, backup servers were sometimes isolated from the VMs they were backing up, preventing certain actions from taking place at all.  Anyway, the guest interaction proxy is a per-job setting and is set up within the VSS settings of the job.  In my case I just needed to flip GuestProxyAutoDetect to $true in order to get Veeam to select the proper GIP.

$job = Get-VBRJob -Name "Job Name"
 
$vssoptions = $job.GetVssOptions()
 
$vssoptions.GuestProxyAutoDetect = $True
 
$job.setVssOptions($vssoptions)

Enable deleted file blocks

Veeam v9 has introduced many data reduction technologies in order to help us save space and more efficiently manage our backup capacity.  The first technique we will look at is the ability to skip backing up deleted file blocks.  This can be enabled on your existing backup jobs by setting the DirtyBlocksNullingEnabled flag as follows.

$job = Get-VBRJob -Name "Job Name"
 
$joboptions = $job.getOptions()
 
$joboptions.ViSourceOptions.DirtyBlocksNullingEnabled = $True
 
$job.setOptions($jobOptions)

Excluding certain folders/files

Another space-saving feature in v9 is the ability to exclude or include certain files or folders within the VMs.  Think about temp directories: under normal circumstances we don't need them, so why take up all that capacity backing them up?  We set this up by first setting the BackupScope property; this can be set to exclude folders (ExcludeSpecifiedFolders), only include folders (IncludeSpecifiedFolders), or simply back up everything (Everything).  Depending on the BackupScope setting we then set GuestFSExcludeOptions or GuestFSIncludeOptions with an array of strings pointing to the desired folders, and finally save our job options as follows…

$job = Get-VBRJob -Name "Job Name"
 
$jobobject = Get-VBRJobObject -Job $job -Name "VM Name"
 
$vssoptions = Get-VBRJobObjectVssOptions -ObjectInJob $jobobject
 
$vssoptions.GuestFSExcludeOptions.BackupScope = "ExcludeSpecifiedFolders"
 
$vssoptions.GuestFSExcludeOptions.ExcludeList = "C:\folder","D:\folder","c:\test\folder"
 
$jobobject.SetVssOptions($vssoptions)

Storage-Level Corruption Guard on Production Backup Jobs (not just backup copy)

SureBackup does a great job of ensuring our VMs will boot; however, there may be certain portions of our data that become corrupt yet still pass a SureBackup test.  To help alleviate this, Veeam has introduced something called Storage-Level Corruption Guard (SLCG) to periodically identify and fix certain storage issues.  SLCG has actually been around in previous versions, but only for Backup Copy jobs.  In v9 it can now be enabled on our production backup jobs, giving us extra peace of mind when the need to restore comes along.   It is enabled by first setting the EnableRechek flag (yes, the property really is spelled like that), then setting a schedule (Daily/Monthly) and a few other options, and finally saving our job options…  Below we've set a job up to perform SLCG on Fridays.

$job = Get-VBRJob -Name "Job Name"
 
$joboptions = $job.getOptions()
 
$joboptions.GenerationPolicy.EnableRechek = $True
 
$joboptions.GenerationPolicy.RecheckScheduleKind = "Daily"
 
$joboptions.GenerationPolicy.RecheckDays = "Friday"
 
$job.setOptions($jobOptions)

Defragment and compact full backup file – on production backups not just backup copy

Over time our full backup files can become bloated and heavily fragmented – when we delete a VM, for example, the full backup might still be holding onto certain data that was in that VM.  Normally we could take an active full backup in order to help purge this data, but as we all know that requires us to affect production and use up valuable resources.  To help alleviate this, v9 has introduced the ability to defragment and compact our full backups on a schedule.  This is done very similarly to SLCG – getting the options of a job and setting the schedule.  Below we enable our defrag to run on Fridays.

$job = Get-VBRJob -Name "Job Name"
 
$joboptions = $job.getOptions()
 
$joboptions.GenerationPolicy.EnableCompactFull = $True
 
$joboptions.GenerationPolicy.CompactFullBackupScheduleKind = "Daily"
 
$joboptions.GenerationPolicy.CompactFullBackupDays = "Friday"
 
$job.setOptions($jobOptions)

So there you have it – a little bit of automation for those that may have to update numerous jobs to fully take advantage of some of the features Veeam v9 has introduced.  As always, please feel free to reach out if any of this isn’t working, or if you have any comments, questions, concerns, rants, etc.  Thanks for reading!

Quickfix – Mass editing Veeam VM Attribute settings with PowerShell

Hi – I’m Mike – you may remember me from such blogs as, oh, this one last year!  I know, it’s been a while since I’ve written anything – more so, published anything.  It’s been a hectic couple of months and I’ve got a ton of drafts sitting just waiting to be touched up, so stay tuned – I promise to be a bit more visible in the coming weeks!  Anyways, I called this a quick fix so I guess I should get to the point…

There is a setting within the notification options inside Veeam Backup and Replication that allows you to write some details to a processed VM’s annotations section.  Now I’ve never had a use for this…until now.  I was reporting on a wide variety of things in regards to specific VMs, and the last successful backup was one of the things that I wanted to report on.  This was within an asp.net application, and dropping to PowerShell and loading up the Veeam cmdlets was something I just didn’t feel like coding within the application.  Also, accessing the Veeam REST API was out of the question, seeing as these VMs were being processed by Veeam Standard licenses – REST is only available within Veeam’s Enterprise Plus offering.  Since I was already connected to the vSphere API to gather a bunch of information such as CPU and memory usage, I thought having Veeam write the last successful backup to the Notes field within vSphere would be a good alternative, as I could just gather that info with the same API calls I was making to vSphere.

One problem presented itself – I had roughly 50 different jobs within the Veeam Backup and Replication deployment that needed to be updated in order to make it happen.  Naturally I looked to automation for the solution – and I found it with PowerShell.  In order to completely enable this feature, and turn off the ‘append’ option you have to touch two different spots within Veeam PowerShell; one, the Set-VBRJobAdvancedViOptions (to enable the actual writing to VM Attributes) and another by setting the VmNotesAppend flag within the ViSourceOptions object.

It’s a simple script, and honestly you probably don’t care, or didn’t read any of the introduction stuff above – I know how things work, you came here for the script – well, here it is…

$jobs = Get-VBRJob

foreach ($job in $jobs)
{
  $job | Set-VBRJobAdvancedViOptions -SetResultsToVmAttribute $True
  $joboptions = $job.GetOptions()
  $joboptions.ViSourceOptions.VmNotesAppend = $False 
  $job.SetOptions($joboptions)
}

There you have it!  A simple script that peels through all the VBR jobs within a console, enables the writing to the VM attribute, and disables the append flag.  An easy script, but useful nonetheless 🙂
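If you want to sanity-check the change afterwards, the same objects can be read back – a quick sketch using only the property the script above touches:

```powershell
# Report the append flag for every job after the mass edit
foreach ($job in Get-VBRJob)
{
  $joboptions = $job.GetOptions()
  Write-Host "$($job.Name): VmNotesAppend = $($joboptions.ViSourceOptions.VmNotesAppend)"
}
```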

Friday Shorts – #VDM30in30 , Post Power On SRM commands, vExpert and more

Here we go, my latest edition of Friday Shorts – a collection of blogs and news that I’ve found interesting over the last few weeks!  In true Canadian fashion I apologize for the list being so short this week 🙂

#VDM30in30

I’m not sure how well known it is, but there is a fun little challenge happening right now coming from the Virtual Design Master folks.  It’s called #VDM30in30.  The concept is simple – 30 blogs in 30 days throughout the month of November.  Just write and syndicate out to Twitter with the #VDM30in30 hashtag.  I watched last year and it actually generated a lot of great content – content that stretched beyond people’s main focus – blogs about career challenges, office setups, etc.  It’s nice to see another side of some of the great bloggers that are participating.  Speaking of participating, I asked Eric Wright (@discoposse) if it was too late to join – his answer: it’s never too late 🙂  So, let’s consider this Post #1 in my list!  Honestly, I don’t think I’m going to hit 30 blogs – I’d be scraping the bottom of the barrel for topics and would end up with some kind of crazy carpal tunnel – but I’ll do my best to get as many as I can out – #VDM5in30 ???

Using PowerCLI to Automate SRM

I have never used SRM, but when it comes to automation – be it through APIs or PowerShell – I’m always interested.  Conrad Ramos (@vnoob) has a great article about how to automate some post power-on commands within SRM using PowerShell and PowerCLI.  And let’s face it, if you are ever in a situation where you have implemented a failover within SRM, you probably want to utilize all of the automation you can, since you will most likely have a number of crazed employees standing behind you panicking 🙂

Oooh Top vBlogs is coming soon!

Every year Eric Siebert spends a tireless amount of time putting together his Top vBlog voting!  Although in the end wherever I end up in the standings really doesn’t affect my writing or post frequency, it’s still a fun little way of placing a number on this blog, as well as ranking my favorite bloggers, writers, and podcasters out there.  It appears he has already begun the planning for the 2016 challenge, so all I ask is as you are perusing around through your feed readers, syndication, and Google results, just take note of whose blog that is – their name may very well be on the list – give them a little love when you hit the polls!

vExpert 2016 applications are open!

For those who haven’t heard, the applications for vExpert 2016 are now open!  Current vExperts are able to quickly apply by filling out a fast-track application, and those that are looking to apply for the first time will need to fill out an application that is slightly longer!  So, if you have started writing, blogging, or evangelizing in any way I encourage you to apply!  It won’t take long and hey, who knows, you might get in.  The vExperts are a humble bunch and always shrug off the benefits, but in all honesty there are some nice perks that come along with the designation – licenses, PluralSight subscriptions, and a lot of great swag provided by a lot of vendors.  Just apply – it can’t hurt!

 

Scheduling Veeam Backup Free Edition backups

As you might be aware, Veeam has released Update 2 for its Backup and Replication software.  With that comes a slew of updates – integration with Endpoint Backup, vSphere 6 support, features, enhancements, bug fixes – you know, the usual suspects that you might find inside of an update pack – you can see them all in the release notes here.  Speaking of release notes – it’s always a good idea to read completely through them before even considering an upgrade – not just to find any known problems or gotchas, but because at times – mostly all the time – you will find a feature or change to the product that isn’t marketed and publicized as much as the rest.  Now Veeam B&R Update 2 is largely about Endpoint Backup integration and support for vSphere 6.0 – which is awesome – but as I was doing my once-over of the release notes I noticed this…


Veeam has a long history of releasing so-called freemium products – giving away a scaled-back portion of their complete solution absolutely free, while offering a paid license for those looking for enterprise features.  Veeam Backup Free Edition is exactly this – allowing administrators to create full backups of their VMs using VeeamZip technology – absolutely free.

The one caveat to this was you were never able to schedule your VeeamZips – so creating a backup was something that had to be manually triggered.  I’m sure many of you (as have I) have tried – only to see the infamous “License is not installed” message when running the Start-VBRZip PowerShell cmdlet.  Well, as of Update 2 you can kiss that message goodbye and begin scheduling that cmdlet to your heart’s delight.

Start-VBRZip

This is a relatively easy process, but in the interest of completeness let’s go over it anyways.  First up, we need to create a PowerShell script that will execute the Start-VBRZip cmdlet, which in turn VeeamZips our VM.  The script I used is below…

Param(
  [Parameter(Mandatory=$true)][string]$VM,
  [Parameter(Mandatory=$true)][string]$Destination,
  [Parameter(Mandatory=$true)][ValidateSet(0,4,5,6,9)][int]$Compression,
  [bool]$DisableQuiesce=$true,
  [Parameter(Mandatory=$true)][ValidateSet("Never","Tonight","TomorrowNight","In3days","In1Week","In2Weeks","In1Month")][string]$Autodelete
)
#Load Veeam Toolkit
& "C:\Program Files\Veeam\Backup and Replication\Backup\Initialize-VeeamToolkit.ps1"
#Validate any parameters
$vmentity = Find-VBRViEntity -Name $VM 
if ($vmentity -eq $null)
{
  Write-Host "VM: $VM not found" -ForegroundColor "red"
  exit
}
if (-Not (Test-Path $Destination))
{
  Write-Host "Destination: $Destination not valid" -ForegroundColor "red"
  exit
}
if ($DisableQuiesce -eq $true)
{
    Start-VBRZip -Entity $vmentity -Folder $destination -Compression $Compression -AutoDelete $Autodelete -DisableQuiesce
}
else
{
    Start-VBRZip -Entity $vmentity -Folder $destination -Compression $Compression -AutoDelete $Autodelete
}

A couple things about the script – you can see that it takes 5 parameters: the VM to back up, the destination to back it up to, the level of compression to apply, whether or not to quiesce the VM, and the auto-delete policy to apply to the backup.  From there we simply load the Veeam toolkit, do a little error checking, and then initiate the backup with Start-VBRZip.  Pretty simple stuff – you can go ahead and try it by saving the script and calling it like so…

VeeamZip.ps1 -VM "VM1" -Destination "E:\backups" -AutoDelete "Never" -Compression 5 -DisableQuiesce $false

Scheduling the script

Pick your poison when it comes to scheduling this script to run – I’ve chosen the standard Windows Task Scheduler to do the job.  So go ahead and create a scheduled task with whatever schedule you like within Windows.  The only really tricky part is passing the arguments to the script – the way I have done it is by selecting ‘Start a program’ as my action, passing the path to PowerShell.exe in my program script, then enclosing my string arguments in single quotes, and the complete argument string in double quotes like below

Program/script: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

Add arguments: "c:\VeeamZip.ps1 -VM 'VM1' -Destination 'E:\backups' -AutoDelete 'Never' -Compression 5 -DisableQuiesce $false"
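Alternatively, if the box has the ScheduledTasks PowerShell module (Windows 8/Server 2012 and later), the task itself can be scripted rather than clicked together.  A sketch only – the schedule, task name, and account are placeholders:

```powershell
# Run VeeamZip.ps1 against VM1 every night at 11pm
$action = New-ScheduledTaskAction `
  -Execute "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
  -Argument "-File c:\VeeamZip.ps1 -VM 'VM1' -Destination 'E:\backups' -AutoDelete 'Never' -Compression 5 -DisableQuiesce `$false"
$trigger = New-ScheduledTaskTrigger -Daily -At 11pm

# Register under an account that has rights to the Veeam server
Register-ScheduledTask -TaskName "VeeamZip VM1" -Action $action -Trigger $trigger -User "DOMAIN\svc_backup" -Password "password"
```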

From there it’s a matter of creating as many scheduled tasks as you have VMs you want backed up, or modifying the script to back up all your VMs – either way, as you can see, Veeam Backup Free Edition has received a nice little feature buried within the Update 2 release notes!

Migrating workflows from one vCO server to another!

Although vCenter Orchestrator does offer an upgrade path for its appliances, there are times where I have found myself needing to deploy a new one and migrate all of my configuration and workflows to it.  vCO has some great built-in workflows that can configure itself, but nothing that really deals with workflows.  Sure, you can export and import workflows one at a time using the client, which may be OK if you have 10 workflows, but what if you have 20, 40, 100 that you need to migrate?  That could get pretty monotonous.

The sheer energy of a mouse click.

That’s right – who wants to exert all that energy of clicking the mouse to get these workflows migrated when we can just orchestrate or automate it – after all, this is vCenter Orchestrator we are talking about.  vCO has a REST plugin that allows us to create workflows around any application that offers a REST API, but did you know that vCO also has its own REST API available for us to use?  So that’s where I started with my task of migrating workflows, and by the time I got to writing this post it truly ended up being a community effort.

“Steal from the best”

This was a quote that I saw on one of Alan Renouf’s slides during a VMworld presentation on PowerCLI.  “Steal from the best”, “Don’t re-invent the wheel” – sayings that have resonated with me for my entire career – why re-do something if it has already been done?  So when I set out on this small project I ended up using two main sources: this post by William Lam on VirtuallyGhetto on how to use curl to export a single vCO workflow, and this forum reply by igaydajiev who “showed me the light” on how to import a workflow back in!  Without these two guys I wouldn’t have been able to do any of this.

Enough already let’s get to it!

So I chose to go the PowerShell route to accomplish this as I’m not too familiar with the REST plugin for vCO.  As well, I am targeting only a certain category – so basically what the script does is take in the following parameters

  • OldvCOServer – the old vCO appliance
  • OldvCOUser/OldvCOPass – credentials for the old appliance
  • OldvCOCategory – Category Name to export workflows from
  • TempFolder – Location to store the exported workflows
  • NewvCOServer – The new vCO appliance
  • NewvCOUser/NewvCOPass – credentials for the new appliance
  • NewvCOCategory – Category name on the new server where you would like the workflows imported.

As far as an explanation goes, I’ll just let you follow the code and figure it out.  It’s basically broken into two different sections: the export and the import.  During the import routine there is a little bit of crazy wonky code that gets the ID of the targeted category.  This is the only way I could figure out how to get it and I’m sure there is a far more efficient way of doing so, but for now, this will have to do.  Anyways, the script is shown below and is downloadable here.  

Param(
  [string]$oldvCOServer="localhost",
  [string]$oldvCOUser="vcoadmin",
  [string]$oldvCOPass="vcoadmin",
  [string]$oldvCOCategory="MyOldWorkflows",
  [string]$newvCOServer="localhost",
  [string]$newvCOUser="vcoadmin",
  [string]$newvCOPass="vcoadmin",
  [string]$newvCOCategory="MyWorkflows",
  [string]$TempFolder
)
 
# vCO Port
$vcoPort="8281"
 
# Type to handle self-signed certificates
add-type @"
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    public class TrustAllCertsPolicy : ICertificatePolicy {
        public bool CheckValidationResult(
            ServicePoint srvPoint, X509Certificate certificate,
            WebRequest request, int certificateProblem) {
            return true;
        }
    }
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
[byte[]]$CRLF = 13, 10
 
function Get-AsciiBytes([String] $str) {
    return [System.Text.Encoding]::ASCII.GetBytes($str)            
}
 
function ConvertTo-Base64($string) {
   $bytes  = [System.Text.Encoding]::UTF8.GetBytes($string);
   $encoded = [System.Convert]::ToBase64String($bytes); 
 
   return $encoded;
}
 
clear
##############  EXPORT OLD WORKFLOWS ##############################
Write-Host "Beginning Export Routine" -ForegroundColor Black -BackgroundColor Yellow
Write-Host ""
# build uri
$uri = "https://"+$oldvCOServer+":"+$vCOPort+"/vco/api/workflows/?conditions=categoryName="+$oldvCOCategory
 
# Authentication token or old server
$token = ConvertTo-Base64("$($oldvCOUser):$($oldvCOPass)");
$auth = "Basic $($token)";
$header = @{"Authorization"= $auth; };   
 
#execute API call
$workflows = Invoke-RestMethod -URi $uri -Method Get -ContentType "application/xml" -Headers $header
Write-Host "Exporting $($workflows.total) workflows from $oldvCOCategory"
Write-Host "-------------------------------------------------------------------------------------------"
#loop through each workflow and export to TempFolder
foreach ($href in $workflows.link.href)
{
    #retrieve information about the specific workflow
    $header = @{"Authorization"= $auth; };   
    $workflow = Invoke-RestMethod -URi $href -Method Get -ContentType "application/xml" -Headers $header
 
    #strip any characters that aren't alphanumeric or underscore from the workflow name
    $workflowname = [System.Text.RegularExpressions.Regex]::Replace($($workflow.name),"[^0-9a-zA-Z_]","")
    $filename = $TempFolder + $workflowname + ".workflow"
    # setup new header
    $header = @{"Authorization"= $auth;
                "Accept"="application/zip"; }; 
    Write-Host "Exporting $($workflow.name) to $filename - " -NoNewline
    Invoke-RestMethod -URi $href -Method Get -ContentType "application/xml" -Headers $header -OutFile $filename
    Write-Host "Done" -ForegroundColor Green  
}
Write-Host ""
Write-Host "Export Routine Complete" -ForegroundColor Black -BackgroundColor Yellow
##################################################################
 
##############  IMPORT WORKFLOWS ##############################
Write-Host ""
Write-Host ""
Write-Host "Import Routines to new server" -ForegroundColor Black -BackgroundColor Yellow
Write-Host ""
 
#Generate auth for new vCO Server
$token = ConvertTo-Base64("$($newvCOUser):$($newVCOPass)");
$auth = "Basic $($token)";
 
#Get Category ID
$header = @{"Authorization"= $auth; };   
$uri = "https://"+$newvCOServer+":"+$vCOPort+"/vco/api/categories/"
$categories = Invoke-RestMethod -URi $uri -Method Get -Headers $header -ContentType "application/xml" 
foreach ($att in $categories.link)
{
    if ($att.attributes.value -eq $newvCOCategory)
    {
        foreach ($newatt in $att.attributes )
        {
            if ($newatt.name -eq "id")
            {
                $categoryID = $newatt.value
            }
        }
    }
}
 
$impUrl = "https://$($newvCOServer):$($vcoPort)/vco/api/workflows?categoryId=$($categoryId)&overwrite=true";
$header = @{"Authorization"= $auth;
            "Accept"= "application/zip";
            "Accept-Encoding"= "gzip,deflate,sdch";};    
 
$workflows = Get-ChildItem $TempFolder -Filter *.workflow
Write-Host "Importing $($workflows.count) workflows to $newvCOCategory"
Write-Host "-------------------------------------------------------------------------------------------"
foreach ($workflow in $workflows)
{
    Write-Host "Importing $($workflow.name) - " -NoNewline
    $body = New-Object System.IO.MemoryStream
    $boundary = [Guid]::NewGuid().ToString().Replace('-','')
    $ContentType = 'multipart/form-data; boundary=' + $boundary
    $b2 = Get-AsciiBytes ('--' + $boundary)
    $body.Write($b2, 0, $b2.Length)
    $body.Write($CRLF, 0, $CRLF.Length)           
    $b = (Get-AsciiBytes ('Content-Disposition: form-data; name="categoryId"'))
    $body.Write($b, 0, $b.Length)
    $body.Write($CRLF, 0, $CRLF.Length)
    $body.Write($CRLF, 0, $CRLF.Length)
    $b = (Get-AsciiBytes $categoryId)
    $body.Write($b, 0, $b.Length)
    $body.Write($CRLF, 0, $CRLF.Length)
    $body.Write($b2, 0, $b2.Length)
    $body.Write($CRLF, 0, $CRLF.Length)     
    # double quotes here so $($workflow.Name) actually expands into the filename
    $b = (Get-AsciiBytes ("Content-Disposition: form-data; name=`"file`"; filename=`"$($workflow.Name)`";"))
    $body.Write($b, 0, $b.Length)
    $body.Write($CRLF, 0, $CRLF.Length)            
    $b = (Get-AsciiBytes 'Content-Type: application/octet-stream')
    $body.Write($b, 0, $b.Length)
    $body.Write($CRLF, 0, $CRLF.Length)
    $body.Write($CRLF, 0, $CRLF.Length)
    $b = [System.IO.File]::ReadAllBytes($workflow.FullName)
    $body.Write($b, 0, $b.Length)
    $body.Write($CRLF, 0, $CRLF.Length)
    $body.Write($b2, 0, $b2.Length)
    $b = (Get-AsciiBytes '--');
    $body.Write($b, 0, $b.Length);
    $body.Write($CRLF, 0, $CRLF.Length);
 
    $header = @{"Authorization"= $auth;
            "Accept"= "application/zip";
            "Accept-Encoding"= "gzip,deflate,sdch";};    
    Invoke-RestMethod -Method POST -Uri $impUrl -ContentType $ContentType -Body $body.ToArray() -Headers $header
    Write-Host "Done" -foregroundcolor Green
}
 
Write-Host ""
Write-Host "Import Routine complete" -ForegroundColor Black -BackgroundColor Yellow
##################################################################

Friday Shorts – Log Insight, @vNoob’s tail, ViPR, Best Practices

Tigers love pepper.  They hate cinnamon.  – Alan Garner (Zach Galifianakis) from The Hangover

It's been a while since the last Friday Shorts – no excuses – I've been slacking 🙂  Whatever – here it is!

Log Insight jumps on the free fundamentals training bandwagon

I'm certainly a huge fan of the 'fundamentals' self-paced training that VMware has been offering for free.  In fact, if you go and have a look at the Top Free Courses on mylearn you will see that the fundamentals series occupies the complete top 10.  Well, another one you can now add to that list is VMware Log Insight Fundamentals.  This course will give you a great introduction and overview as to what VMware Log Insight is, as well as go through some installation, configuration and analysis.  Certainly a great way to get introduced to the product – oh, and for free!

vNoob Guide to Getting Tail

I tried to think of a catchy title here but I couldn't think of anything Conrad hasn't already said in his latest post, vNoob Guide to Getting Tail.  Now I have no idea how well Conrad does with the ladies, but he certainly gives a great overview on how to use the 'tail' command, not only in Linux but in PowerShell as well!!   This is most certainly not your average man page, so get ready to be amused by some of Conrad's wittiness!!!  Good job!

Best Practices are only best practices until they are not!

Here's a great example on the vSphere Blog of why you always need to re-evaluate any settings or best practices you have implemented in the past.  The fact of the matter is, with every new release comes new best practices – and even things that you think you might not need to revisit, you do!  Things like your power savings settings in the BIOS no longer calling for 'High Performance' but now being recommended to be set at 'OS Controlled'.

Feel like testing out EMC's ViPR in your lab?  Go ahead!

EMC has opened the flood gates for anyone to go and grab ViPR for use in a non-production environment.  That's right, if you feel like testing out EMC's take on software-defined storage just head on over here, grab a copy and take it for a test drive.  I've often blogged about other companies dishing out NFR licenses for their products and how I think it is a great idea – it gives people the chance to get these bits into their labs and test them out without all of the salesy webinars, conference calls, and pestering emails 🙂.   No matter what the play of EMC is here, I intend to get my hands on this so stay tuned!

Kerberos authentication for the PowerShell plugin in vCO 5.5

The ability to have vCO kick off PowerShell scripts is pretty awesome!  And the fact that you can kick these off contextually inside of the vSphere Web Client is even more awesome!  Even more awesome than that – yes, that’s a lot of awesome – are the new features offered with vCenter Orchestrator 5.5 – so, I’ve taken the plunge on one of my environments and upgraded.  Since then I’ve been slowly migrating workflows over – one of which utilized the PowerShell plug-in.  Now, since the appliance mode of vCO requires you to do a rip and replace rather than an upgrade (because I’m using the embedded database) I had to reinstall the PS plugin, therefore forcing me to reconfigure the Kerberos settings on vCO.   During this I realized that things are a little bit different than when I first blogged about vCO and PowerShell here.  Below is how I got it to work…

First up is the WinRM setup on your PowerShell host.  This process  hasn’t changed from 5.1, however I’ll still include the steps and commands that need to be run below.  Remember these are to be executed on the Windows box that you wish to run the PowerShell script from.

  • To create a winrm listener and open any required firewall ports: winrm quickconfig
  • To enable Kerberos authentication: winrm set winrm/config/service/auth @{Kerberos="true"}
  • To allow transfer of unencrypted data: winrm set winrm/config/service @{AllowUnencrypted="true"}
  • To raise the max memory per shell (I needed to do this to get things working): winrm set winrm/config/winrs @{MaxMemoryPerShellMB="2048"}

Now on to the krb5.conf file – this is where things get a bit different.  In vCO 5.1 we were required to edit the krb5.conf file located in /opt/vmo/jre/lib/security/ – well, if you go looking for that directory on 5.5 you won’t find it.  Instead, we need to create our krb5.conf file in /usr/java/jre-vmware/lib/security/.  As far as what goes in the file, it is the same as before and is listed below… (obviously substituting your own domain for lab.local and your own dc for the kdc definition).

[libdefaults]
default_realm = LAB.LOCAL
udp_preference_limit = 1

[realms]
LAB.LOCAL = {
kdc = dc.LAB.LOCAL
default_domain = LAB.LOCAL
}

[domain_realm]
.lab.local = LAB.LOCAL
lab.local = LAB.LOCAL

After you have saved the file in the proper directory we need to modify the permissions.  The following line should get you the proper permissions to get everything working.

chmod 644 /usr/java/jre-vmware/lib/security/krb5.conf

Just a few other notes!  You might want to modify your /etc/hosts file and be sure that you are able to resolve the FQDNs of both your dc and the PowerShell host you plan to use.  Also, when adding the PowerShell host, be sure to select Kerberos as your authentication type and enter your credentials using the ‘user@domain.com’ format.
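One last hedged tip – before pointing vCO at the host, it can save some head-scratching to confirm from another Windows machine that the WinRM listener actually answers Kerberos-authenticated requests.  Test-WSMan supports exactly this (the host name and account below are placeholders):

```powershell
# Prompt for domain credentials and test the Kerberos-authenticated WinRM endpoint
$cred = Get-Credential "user@lab.local"
Test-WSMan -ComputerName powershellhost.lab.local -Authentication Kerberos -Credential $cred
```

If this returns the wsmid identification block instead of an access-denied error, the listener and Kerberos settings are in good shape.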

For now, that should get you automating like a champ!