
Running free VeeamZip directly from the vSphere Web Client

There are a lot of times I find myself needing to take a one-off backup of a VM – prior to software upgrades or patching I always like to take a backup of the affected VM(s) in the event that, well, you know, I mangle things.  VeeamZip is great for this – it allows me to take a quick backup of a VM that is separate from its normal backup and replication routines.  Since I work in an environment running paid Veeam licenses I have access to the Veeam Plug-in for the vSphere Web Client – and this plug-in does exactly what the title of this blog post says – it allows us to perform VeeamZips of our VMs without having to leave the vSphere Web Client or log into our Veeam Backup and Replication console.

What if I’m using Veeam Backup and Replication FREE?

So this is all great for me, but it got me thinking – what if I wasn't running a paid version of Veeam Backup?  What if I was simply running the free version?  It doesn't come with Enterprise Manager, therefore there's no way to get the Veeam Backup and Replication Web Client plug-in installed – therefore no VeeamZip from the Web Client, right?  Wrong!  Ever since Veeam Backup and Replication v8 U2 came out, Veeam has included PowerShell cmdlets around the VeeamZip functionality.  I wrote about how to use it last year in Scheduling Veeam Free Edition Backups.  Since we have PowerShell, we can use vRealize Orchestrator to build a workflow around it – and we have the ability to execute workflows directly from within the vSphere Web Client.  So without further ado: running the free VeeamZip functionality directly from the vSphere Web Client.

First up, the script

I didn't get too elaborate with the script, as you can see below.  It's simply a handful of lines that take in a few parameters: the VM to back up, the destination to store the backup in, and the retention (or auto-deletion) setting for the backup.

Param(
[Parameter(Mandatory=$true)][string]$VM,
[Parameter(Mandatory=$true)][string]$Destination,
[Parameter(Mandatory=$true)][ValidateSet("Never","Tonight","TomorrowNight","In3days","In1Week","In2Weeks","In1Month")][string]$Autodelete
)
#Load Veeam Toolkit
& "C:\Program Files\Veeam\Backup and Replication\Backup\Initialize-VeeamToolkit.ps1"
#Get the VM Veeam Entity.
$vmentity = Find-VBRViEntity -Name $VM
 
#VeeamZip it!
Start-VBRZip -Entity $vmentity -Folder $Destination -AutoDelete $Autodelete -DisableQuiesce

That's it for the script – simple, right?  Feel free to take this and add whatever you see fit to suit your needs 🙂
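For reference, here's what a manual run of the script from the Veeam server itself might look like – a minimal sketch where the script file name, VM name and destination path are all placeholders for your own values…

# Hypothetical example run – adjust the script path, VM name and destination for your environment
.\VeeamZipVM.ps1 -VM "DC01" -Destination "E:\VeeamZips" -Autodelete "In1Week"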

The Orchestrator Configuration

Before we get to creating our workflow there are a few things we need to do within Orchestrator – mainly, adding the server that hosts our Veeam Free instance as a PowerShell host within vRO.  But even before we run the 'Add a PowerShell host' workflow we need to run a few winrm commands on the Veeam Free instance.  I have a complete post about setting up a PowerShell host here, but I'll include the commands you need to run below for quick reference.

First up, on the Veeam server run the following in a command shell…

  • winrm quickconfig
  • winrm set winrm/config/service/auth @{Kerberos="true"}
  • winrm set winrm/config/service @{AllowUnencrypted="true"}
  • winrm set winrm/config/winrs @{MaxMemoryPerShellMB="2048"}
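Before moving on, it's worth confirming that WinRM is actually answering.  A quick sanity check from any other Windows machine uses PowerShell's built-in Test-WSMan cmdlet – the host name below is just a placeholder for your Veeam server's FQDN…

# Returns a wsmid response if the WinRM listener is reachable (host name is a placeholder)
Test-WSMan -ComputerName veeam.lab.local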

Next, from within vRO (as shown below) we can run the 'Add a PowerShell host' workflow…

PowerShellHost1

As you can see, my Veeam server is actually the same as my vCenter Server – I don't recommend doing this, but hey, it's a small lab!  Just be sure to use the FQDN of your Veeam server in the Host/IP column.

PowerShellHost2

Ensure that the remote host type is WinRM, and that the Authentication method is set to Kerberos.

PowerShellHost3

And be sure that you are passing your username in the 'username@domain' format, along with 'Shared Session' for the session mode.  Once you are done go ahead and click 'Submit'.  If everything goes as planned your Veeam Backup and Replication server should be added as a PowerShell host within vRealize Orchestrator.

And now, the workflow!

Finally we can get to actually building our workflow.  If you remember, our script takes in three parameters: VM, Destination and AutoDelete – so we will mimic the same with our workflow, only calling them input parameters within vRO (shown below).

workflow1

Now, since we will be using the built-in PowerShell workflow 'Invoke an external script' we will also need to set up some workflow attributes to pass to that workflow.  Below you can see how I've set up mine…

workflow2

Your configuration may vary a little from this one, but as you can see we simply add a PowerShell host attribute and map it to our newly added host, as well as assign the ScriptPath attribute to the path where we saved our little VeeamZip script earlier.  The arguments attribute can remain empty, as we will only use it to build the arguments string to pass to the script.

workflow3

The first element we want to add to our workflow schema is a Scriptable task – go ahead and drag that over into your workflow.  This is where we will create our arguments string.

workflow4

As far as what goes into the scripting, you can see I've simply brought in the arguments attribute along with our three input parameters and chained them together into one string (arguments = '"'+VM.name+'" "'+Destination+'" "'+AutoDelete+'"';), then ensured that my arguments attribute was included in the output as well.

workflow5

Next, drag the 'Invoke an external script' workflow into your schema (you can see I've renamed mine 'Run VeeamZip').  Ignore all of the prompts regarding the setup of parameters that pop up – the easiest way I've found to do this is by editing the workflow (the pencil icon above it) and using the 'Visual Binding' tab as shown below.

workflow6

Simply drag and drop your in-attributes to their corresponding in-attributes on the external script workflow, along with mapping your output to output.  Easy peasy!

At this point you can go ahead and save and close your workflow – we are done with Orchestrator.  If you want to run the workflow a few times to test from within vRO go ahead – but the point of this post was to run it from within the Web Client so let’s move on to that step.

vRO and vSphere Web Client

I love vRealize Orchestrator and I love the fact that I can contextually execute custom workflows from within the vSphere Web Client.  To do this you need to first register your vRO instance with vCenter – this may already be done for you, depending on how you set everything up – I'm not going to get into that configuration today.  To get to our context mappings we need to click on Home->vRealize Orchestrator.  With the vRO home in context, select the 'Manage' tab and then 'Context Actions'.  We then want to hit the little green + sign to add a new workflow context mapping.

webclientworkflow1

The next steps are pretty self-explanatory – navigate through your vRO inventory to your workflow, click 'Add', and select 'Virtual Machine' from the types box.  This is what will allow us to right-click on a VM and run our VeeamZip, passing the contextually selected VM to the workflow's VM input parameter.  Click 'OK' and it's time to VeeamZip!

Now when you want to run the workflow you can simply right-click a VM and navigate to (in my case) All vRealize Orchestrator Actions->VeeamZipVM.

execute1

As you can see our workflow will start, using the VM we selected as the VM input, and simply prompt us for the Destination and AutoDelete settings.

execute2

And there you have it!  We can now use the Free version of Veeam Backup and Replication to VeeamZip our VMs directly from within the vSphere Web Client.  In fact, our workflow will even show up within our vSphere tasks so we can monitor the status of the job.  Now, there is no error checking or anything like that…yet!  Let me know if you have any questions, concerns, etc… always happy to read the comments!

Friday Shorts – #vDM, New Web Client, Linux Cleanup, Betas and more…

Is it flowing? I like flowing, cascading hair. Thick lustrous hair is very important to me. Let me ask you this. If you stick your hand in the hair is it easy to get it out?

George Costanza – Seinfeld

Virtual Design Master 4 looking for sponsors

If you have never checked out the Virtual Design Master challenge I suggest you stop reading this and head over to their site and peruse the last 3 seasons – then come back here, of course…  Anyways, the online, reality-based challenge is back for Season 4 and they are looking for sponsors to help provide prizes, swag, infrastructure, etc. for the upcoming season!  So if you work for a vendor and want to get your brand attached to VDM4, follow this link to indicate your interest!  They are looking to get everything firmed up in time for a July/August competition.

New HTML5 vSphere Web Client!

Why VMware feels the need to change the lightning-fast, crazy-responsive, highly reliable vSphere Web Client that is currently out there is beyond me, but they are…  I hope you can detect the sarcasm in that last sentence.  Anyways, they have been hard at work (re)developing the vSphere Web Client, removing its reliance on Flash and Flex and providing the same functionality through code based on HTML5.  I've not yet had a chance to check this out, but from the reactions on the blogosphere and Twitter I'd say they are on the right track!  They are releasing the HTML5 web client as a Fling, allowing the product to get out into everyone's hands before it's integrated into a vSphere release.  If you have a chance go and check it out – it's simply a virtual appliance that integrates with your current environment.

Getting Linux ready for a vSphere Template!

Fellow VFD4 delegate Larry Smith recently posted about cleaning up your Ubuntu templates!  It's a great post that covers a lot of things you can do to ensure you have a clean, prepped instance of Ubuntu to use as a template within your vSphere environment.  That said, he takes it one step further, scripting out the complete cleanup in bash – and in Ansible.  If you deal with Linux/Ubuntu templates I would definitely recommend heading over to Larry's blog and applying some of this scripty goodness.

vSphere.next – Beta Time!

VMware has announced that the next version of vSphere will enter a (limited) public beta.  If you feel like you have the time and are ready to put in the effort of providing feedback, submitting bugs, etc. to VMware on the next release of vSphere, you can head here and indicate your interest in being a part of the beta.  As far as I know not everyone will be accepted – careful consideration will be given to who is chosen to participate, as VMware wants to ensure they are getting valuable feedback and discovering any gotchas in the product before releasing it to the masses!

ZertoCon – The Premier Business Continuity Conference

Zerto has been a long-time sponsor of this blog so I thought I'd give a shoutout to them and what they have in the works this spring!  You can join Zerto and many others from May 23-25 in beautiful Boston for ZertoCon.  Lately we have seen a lot of smaller vendors opting to hold their own conferences – and honestly, if you use their products these are a must-attend!  The VMworlds and EMC Worlds are great venues, but these smaller, laser-focused conferences are absolutely fabulous if you are looking to gain more knowledge around certain vendors and their ecosystems!  I encourage you to check it out and sign up if you have the chance to go!

Adding Veeam Proxies to jobs via Powershell

There will come a time in every growing environment when you need to scale your Veeam Backup and Replication deployment to keep up with the ever-increasing demands of backing up all those new virtual machines.  Veeam has a couple of different deployment models when it comes to scaling.  We can scale up – done by adding more CPU and RAM to our current proxies and increasing the maximum number of concurrent tasks those proxies can process – a good rule of thumb here is to dedicate one CPU per task, so 2 concurrent tasks = 2 CPUs.  The other option is to scale out, done by building and adding additional Veeam proxies into the fold.  Which one you choose is completely up to you; in my experience, though, I've had better results scaling out and adding more Veeam proxies into my infrastructure.  Why?  Not really sure – I just don't like having more than 2 or 3 processes hitting any one proxy at the same time.  Just a preference, really…

If we accepted the defaults when creating our backup/replication jobs they should be set to 'Automatic selection' for the Backup Proxy setting – meaning our work is done: as soon as we add the proxy into our backup infrastructure it will be available to all the jobs.  That said, if you have changed settings (like me) to specify certain groups of proxies for certain jobs, then you will have to edit each and every job in order to have it utilize the new proxy.  This isn't a hard process, but it can take some time depending on how many jobs you have.  I don't have an extreme number of jobs – maybe 10 or so – but I also don't like doing the same thing over and over, as it often leads to mistakes.

Enter PowerShell

So with all that said, here's a quick little PowerShell script that you can use to add a new proxy to a list of existing jobs.  As you can see I've chosen to add it to all of my jobs, but this can easily be modified to target only the jobs you want by filtering the output of the Get-VBRJob cmdlet.  The script is pretty simple: it takes only one parameter, the proxy to add (***NOTE*** you will need to go into Veeam B&R and configure this machine as a proxy within your Backup Infrastructure first – that part isn't automated), loops through all of my jobs, retrieves the list of existing proxies, adds the new one to that list, then applies the new list back to the job.  It does this for both the source proxies and the target proxies (as you can see with -Target).

Param ( [string]$proxyToAdd )

Add-PSSnapin VeeamPSSnapIn

$newProxy = Get-VBRVIProxy -Name $proxyToAdd
$jobs = Get-VBRJob

foreach ($job in $jobs)
{
    # add the new proxy to the job's source proxy list
    $existingProxies = Get-VBRJobProxy -Job $job
    $newProxyList = $existingProxies + $newProxy
    Set-VBRJobProxy -Job $job -Proxy $newProxyList

    # and do the same for the target proxy list
    $existingProxies = Get-VBRJobProxy -Job $job -Target
    $newProxyList = $existingProxies + $newProxy
    Set-VBRJobProxy -Job $job -Proxy $newProxyList -Target
}

Simply save your script (I called it AddProxyToJobs.ps1) and run it as follows…

c:\scripts\AddProxyToJobs.ps1 newproxyname

There is absolutely no error checking within the script, so by all means make sure you get the syntax right or you could end up with a slew of errors.  Either way, this is a nice way to add a new proxy to a list of jobs without having to manually edit every job.  And as I mention with every script I write, if you have any better ways to accomplish this, or see any spots where I may have screwed up, please let me know…
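If you did want a small safety net, a minimal sketch like the following would bail out when the proxy name doesn't resolve and narrow the job list down – the 'PROD-*' pattern here is purely hypothetical…

# Sketch: verify the proxy exists before touching any jobs
$newProxy = Get-VBRVIProxy -Name $proxyToAdd
if (-not $newProxy) {
    Write-Error "Proxy '$proxyToAdd' not found - configure it under Backup Infrastructure first."
    return
}
# Only process jobs matching a naming pattern (hypothetical filter)
$jobs = Get-VBRJob | Where-Object { $_.Name -like "PROD-*" }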

How the Friday Shorts posts come to be

I'm always looking for ways to automate things – whether in my work life, personal life, home life, or even my blog life.  When I first started doing Friday Shorts – an initiative for me to share blogs and articles that sparked my interest – it was a manual process.  There was a lot of "Who wrote that article again?  What site was that?  Where's that link I emailed?" going on in my head, and even though it was only a handful of links it was still a lot of work.  Now, I had already automated sharing links out to Twitter (which needs an update BTW – OAuth1 is deprecated in Google Apps Script now – that's another post), and for the most part I've found that the articles I choose for Friday Shorts are the same ones I share out.  So with all that said, I set out to automate a process that would at the very least get a draft of a Friday Shorts article into this blog – and this is what I've come up with.

To do this I've utilized four different services – Google Drive/Apps Script, IFTTT, Delicious, and WordPress – each playing a key role in the automation.  The process goes as follows:

  • While perusing all the great blogs out there if I find one I feel needs to be included, I quickly use the Delicious toolbar button to create a public bookmark with the ‘FridayShorts’ tag.  Also, while using my RSS reader of choice (Digg) – if I ‘Digg’ a post, IFTTT runs a recipe to automatically create bookmarks of the post with the proper tag.
  • Another IFTTT recipe takes all of my Delicious ‘FridayShorts’ tagged bookmarks, and appends them to a Google Spreadsheet within my Drive account.
  • From there I have a WordPress plugin I’ve developed which essentially connects to a Google Script which parses a spreadsheet – allowing me to select which articles I’d like to include and finally creating a draft WordPress post (Friday Shorts) following a specific template.  The articles which I select are then updated (through the Google Script calls) in order to ensure they aren’t displayed the next time I go through the process.
  • I clean up the post, add some pictures, descriptions, links and whatever and publish it…

So with all that said let’s have a quick look at how each of the components are setup and configured.

IFTTT

If This Then That is a great tool for automating almost anything in your life.  For the purposes of creating a Friday Shorts post I have two main recipes…  In all honesty, recipes on IFTTT are fairly simple to set up, so I'll simply show the screenshots outlining what they do.

Digg to Delicious
Delicious to Google
DiggToDeliciousIFTTT DelciousToGoogleIFTTT

See – pretty simple to set up.  At the end of it all we should be left with a spreadsheet similar to the following…

FridayShortsSpreadsheet

Google Script

Now we have the information we want sitting inside a Google Spreadsheet – but before we get into the WordPress plugin we first need to create a Google Script (script.google.com) containing all of the functions and methods which will actually do the work we request from WordPress.  To do so, go to script.google.com and select File->New Project.  This should open up a blank script for you to run with.

Sometimes it's easier just to see the whole thing first, so let's lay it all out – we can go through some specifics at the end of the script.

// open up the spreadsheet and set the active sheet.
var files = DriveApp.getFilesByName('FridayShortsLinkCollector');
var file;
while (files.hasNext())
{
  file = files.next();
  Logger.log(file.getName());
}

var ss = SpreadsheetApp.open(file);
var sheet = ss.getSheets()[0];

// return specific link
function getLink(articleid)
{
  var values = sheet.getDataRange().getValues();
  var articles = [];
  for (var i = 0, iLen = values.length; i < iLen; i++)
  {
    if (values[i][6] == articleid)
    {
      var article = [];
      article.push(values[i][6].toString());
      article.push(values[i][1].toString());
      article.push(values[i][4].toString());
      article.push(values[i][0].toString());
      article.push(values[i][5].toString());
      articles.push(article);
    }
  }
  return articles;
}

// return all unprocessed links
function getAllLinks2()
{
  var values = sheet.getDataRange().getValues();
  var articles = [];
  for (var i = 0, iLen = values.length; i < iLen; i++)
  {
    if (values[i][5] == "0")
    {
      var article = [];
      article.push(values[i][6].toString());
      article.push(values[i][1].toString());
      article.push(values[i][4].toString());
      article.push(values[i][0].toString());
      article.push(values[i][5].toString());
      articles.push(article);
    }
  }
  return articles;
}

// mark specific link as processed
function markArticleProcessed(articleid)
{
  var values = sheet.getDataRange().getValues();

  for (var i = 0, iLen = values.length; i < iLen; i++)
  {
    if (values[i][6] == articleid)
    {
      values[i][5] = "1";
    }
  }
  sheet.getDataRange().setValues(values);
}

So, if you are having trouble with my great comments throughout the code (I know, a one-liner here and there :)) let me try to explain a few things.  First of all, the first eleven lines simply find the spreadsheet within my Google Drive containing the links, then set the active sheet to the first one.

From there things are a bit simpler.  The getLink() function returns an array containing one specific article, taking the articleID (an auto-generated number in the spreadsheet) as a parameter.  The getAllLinks2() function returns an array of all of the articles within the spreadsheet, so long as their 5th cell ('processed', as I call it) contains a value of 0.  The markArticleProcessed() function takes a specific articleID as a parameter and simply changes its 5th cell to a 1 – meaning it has been processed into a Friday Shorts article.

That's it for the Google code, but there are a few other things you will need to do in order to make your script available for use by the WordPress plugin.  First off, select 'Publish->Deploy as API executable'.  Be sure to select 'New' under the version (you will need to do this upon making any changes to the code) and make note of the API ID – we will need this for the PHP calls in WordPress.  Go ahead and click 'Update' when ready!

Secondly, we need to enable some APIs within Google in order to allow the script to access your Google Drive content.  This is done in the Developers Console.  The easiest way I've found to get where we need to be is by selecting Resources->Developer Console Project.  If you haven't already, you will have to give your project a name and save it.  Once that is done, simply click the link shown to go to the script's associated project.

DevelopersConsoleLink

The two APIs that we will need to enable for this project are the Drive API and the Google Apps Script Execution API (as shown below).  From the Dashboard select 'Enable APIs and get credentials like keys'.  From there you should immediately see the Drive API, but you may have to search for the Script Execution API.  Either way, in the end you need to have both enabled as shown below…

APIsEnabled

From here move down to the Credentials section.  We need to create credentials to allow WordPress to access our Google content.  To do so select 'New Credentials' and then 'OAuth Client ID' as the type.  When presented with the application type choose 'Web Application', give it a name, and set the proper redirect URIs.  This can get tricky and will certainly vary given your setup, but my working setup is shown below.

redirectURIs

As you can see I've entered quite a few redirect URIs – I'm not sure if I need them all, but it works this way.  Also, make note of your Client ID and Secret – put them in the same place as your Script/API ID, as we will need all of this for the WordPress plugin.  For now, we are done with Google 🙂

The WordPress Plugin

I don't want to go too deep into the specifics of how to create WordPress plugins – partly because there is a lot to cover, too much for this post, and partly because I have no idea about 99% of it.  I simply fiddled until I got what I needed to work.  Again, let me simply lay out some code and explain what it does – I'll leave the fine details of the structure of WordPress plugins to someone else.

folderstructure

As for my plugin there are really only 5 components to it…

  • The Google PHP SDK – download it here
  • fridayshorts.php – this is the main plugin page
  • fridayshorts-functions.php – this is a page containing all of the function calls to the Google Script
  • options.php – this page contains a means to setup the options for the plugin (the Script ID, Client ID and Client Secret)
  • js – some Javascript for checking boxes and stuff 🙂

Due to the fact that there are probably a few hundred lines of code within the plugin itself I'm not going to throw it all out there – instead I'll just put up a few examples of how I call the Script Execution API from within the PHP code.

First up, we need to import the PHP SDK for Google and declare some variables.  I've stored my client ID and secret (from the Google section above) in WordPress options – so to recall them we simply need to do the following…

require_once 'Google/autoload.php';
 
$client_id = get_option('fsGoogleClientID');
$client_secret = get_option('fsGoogleClientSecret');
$redirect_uri = get_option('fsGoogleRedirectURI');

Now that we have this information we can start setting up the objects we need to interact with our Google script, as shown below, storing everything we need in the client object.  As you can see I've also specified the scopes under which the API requests will fall.

$scriptId = get_option('fsGoogleScriptID');
$client = new Google_Client();
$client->setApplicationName("Process Friday Shorts");
$client->setClientId($client_id);
$client->setClientSecret($client_secret);
$client->setRedirectUri($redirect_uri);
$client->setScopes(array('https://www.googleapis.com/auth/drive','https://spreadsheets.google.com/feeds','https://www.googleapis.com/auth/spreadsheets'));
$client->setAccessType('offline'); // Gets us our refreshtoken

As far as executing one of the functions in the Google script, that can be done as shown below.  I've also shown the code I use to display data on my plugin pages so you can sort of visualize what is happening.  The function name getAllLinks is passed through the setFunction method (a lot of 'function' in there :)) and you can see how I go about parsing the response back from the API call to build out an HTML table containing data from my spreadsheet.

function getAllLinks()
{
    global $client, $scriptId;
    $client->setAccessToken($_SESSION['token']);
    $service = new Google_Service_Script($client);
    $request = new Google_Service_Script_ExecutionRequest();
    $request->setFunction('getAllLinks');
    $response = $service->scripts->run($scriptId, $request);
    $resp = $response->getResponse();
    $articles = $resp['result'];

    // build html table and return
    $content = '<table width="98%" bgcolor="white"><TR><TD><input type="checkbox" onchange="fridayshorts_links_checkall(this)" name="checkAll" value="all"></TD><TD><B>Title</B></TD></TR>';
    foreach ($articles as $article)
    {
        $content .= '<TR><TD><input type="checkbox" name="art[]" value="' . $article . '"></TD><TD>' . $article . '</TD></TR>';
    }
    $content .= '</TABLE>';
    return $content;
}

Anyways, enough with the PHP code – if you are in dire need of it just let me know and I'll send it to you – it's way too boring to go through line by line.  In the end, though, I'm left with a nice little GUI that allows me to select which items I'd like to include in my Friday Shorts post, as shown below…

fridayshortsplugin

Once I've selected which posts I'd like to include within my new Friday Shorts post I can go ahead and click Create Draft.  What happens then is that a new draft is created within WordPress in the format I specified for my Friday Shorts posts.  The code to do so is as follows…

$my_post = array(
    'post_content' => $content,
    'post_title'   => "Friday Shorts",
    'post_status'  => 'draft',
    'tags_input'   => 'Friday Shorts'
);
$post_id = wp_insert_post( $my_post, true );

Additionally, remember that markArticleProcessed function within our Google Script?  It's called as well – as the plugin loops over the selected links it sends each article's ID back to the Google Script using the setParameters method on the request object, as follows…

function markArticleProcessed($articleid)
{
    global $client, $scriptId;
    $client->setAccessToken($_SESSION['token']);
    $service = new Google_Service_Script($client);
    $request = new Google_Service_Script_ExecutionRequest();
    $request->setFunction('markArticleProcessed');
    $request->setParameters($articleid);
    $response = $service->scripts->run($scriptId, $request);
    $resp = $response->getResponse();
    $article = $resp['result'];

    return;
}

So now you know just how far I will go in order to maintain my comfort level of laziness – honestly, automation is key in my life, and anything I can automate means more time for creativity!  Again, I'm sorry I couldn't go deeper into the PHP/WordPress plugin development – it would just be one heaping pile of code on a page that makes no sense – but if you are interested, definitely get in touch with me and I will send it along!  Anyways, thanks for reading this far and I hope this post helps you automate something of your own!

Setting up VVOLs on HP 3PAR

As I've recently brought an HPE 3PAR 7200 into production with an ESXi 6.0 U2 cluster, I thought: what better time than now to check out just how VVOLs are implemented by HPE.
Although the tasks to do so aren't difficult by any means, I find the documentation around them is a bit scattered across different KBs and documents between VMware and HPE, especially if you have upgraded to the latest firmware (3.2.2 MU2).

Pre-reqs

As far as prerequisites go there really aren't that many, other than ensuring you are up to date on both your 3PAR firmware and ESXi versions.  For the 3PAR you will need to be running at the very least 3.2.1.  In terms of vSphere – 6.0 or higher.  Also, don't forget to check your HBAs against the VMware HCL to ensure they are actually supported, and note the proper firmware/driver combinations recommended by VMware.

After spending the day(s) updating firmware (ugh!) it’s finally time to get going.

Step 1 – Time

NTP is your friend for this step.  Before proceeding any further you need to ensure that your hosts, vCenter Server and 3PAR are all synced in terms of time.  If you have NTP set up and running then you are laughing here; if you don't, stop looking at VVOLs and set it up now!  It should be noted that the 3PAR and the VMware infrastructure can be set to different time zones, however they must still be synced in terms of time!
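If you have PowerCLI connected to your vCenter, a quick sketch like the one below will show each host's NTP servers and whether the NTP service is actually running – standard PowerCLI cmdlets, nothing exotic…

# List each host's NTP servers and the state of the ntpd service
Get-VMHost | Select-Object Name,
    @{N="NTPServers";E={Get-VMHostNtpServer -VMHost $_}},
    @{N="NTPRunning";E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq "ntpd"}).Running}}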

Step 2 – Can we see the protocol endpoint?

At this stage we should check our ESXi hosts to ensure we can see the protocol endpoint on the 3PAR.  To do so we will need to confirm that we see the same WWN after running a couple of different commands.  First, as shown below, the 'showport' command on our 3PAR.  Circled is the WWN of our 3PAR array – make note of this!

showport

With the WWN of our storage array in memory we can now head over to our ESXi hosts.  SSH in and run the 'esxcli storage core device list --pe-only' command.  This command will return any protocol endpoints visible from the ESXi host.  If all goes well we should see the same WWN that we did with showport, and the 'Is VVOL PE' flag set to true – as shown below.

pe-only

As you can see, we have a match so at least we have some visibility from our hosts!
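As a side note, the same check can be driven from PowerCLI rather than SSH via Get-EsxCli.  A rough sketch is below – the host name is a placeholder, and I'm assuming the 'Is VVOL PE' column surfaces as an IsVVOLPE property on the returned objects, so verify against your PowerCLI version…

# Run the esxcli device list remotely and keep only protocol endpoints (property name is an assumption)
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2
$esxcli.storage.core.device.list.Invoke() | Where-Object { $_.IsVVOLPE -eq "true" } | Select-Object Device, DisplayName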

Step 3 – VASA

As we all know, the whole concept of VVOLs requires the array to support VASA 2.0 and act as a storage provider for vCenter – this is what allows us to create our VM storage profiles and have the array automatically provision VVOLs depending on which profile is selected.  On the 3PAR we can check the status of VASA by simply running the 'showvasa' command.  In the case shown we can see that it is already enabled and functioning properly; however, this wasn't always the case for me.  To enable the service I first tried the 'startvasa' command, but it complained about not having a certificate.  If you plan on using self-signed certificates, the easiest way to generate one is to simply run the 'setvasa -reset' command.  This will reset your VASA configuration and generate a self-signed cert.  After this you can simply run 'startvasa' to get everything up and running…

Step 4 – Create the storage container

Now, if you are following the HPE VVOL integration guide you won't see this step, mainly because the guide was written around the 3.2.1 firmware, which came with a single default storage container already created for you.  If you are running 3.2.2 you have the option to create more than one storage container, and by default it comes with, well, no storage containers.  So before we go and register our vCenter with the VASA provider we first need to create a storage container to host our VVOL datastore.  First, create a new virtual volume set with the following command…

createvvset myvvolsetname

Then, let’s create our storage container and assign our newly created set to it

setvvolsc -create set:myvvolsetname

Again, these commands aren't required in 3.2.1 as far as I know, but they are in 3.2.2.

Step 5 – Register our VASA within vCenter

Now it's time to head over to the familiar, lightning-fast interface we call the vSphere Web Client and register the 3PAR's VASA implementation as a storage provider.  Make note of the 'VASA_API2_URL' shown in step 3 – you will need it when registering.  With your vCenter Server context selected, navigate to Manage->Storage Providers and click the plus sign to add a new storage provider.

registerprovider

Enter your VASA URL from step 3, along with a name, username, and password, and click 'OK'.  For this instance I've used 3paradm, but you may be better off creating a dedicated account with just the 'service' role on the 3PAR.  Either way, get your new storage provider registered in vCenter and wait for the status to show as online and active.

Step 6 – The VVOL datastore

We are almost there, I promise!  Before we can deploy VMs within a VVOL or assign storage profiles that match certain CPGs within the 3PAR, we need to have our VVOL datastore set up within vCenter.  I found the best spot to create this datastore is by right-clicking the cluster or ESXi host we want to have access to VVOLs and selecting Storage->New Datastore.  Instead of selecting VMFS or NFS as we normally would, select VVOL as the type, as shown below.

vvoltype

On the next screen simply give your datastore a name and select the storage container (this is what we made available in step 4).  Then select the hosts you wish to be able to deploy VVOLs from and away you go!

Step 7 – Storage Profiles

At this point you could simply deploy VMs into your newly created VVOL datastore – the 3PAR will intelligently choose the best CPG to create the VVOL in – but the real power comes from being able to assign VM storage profiles to our disks and having each VVOL land in the proper CPG depending on the array capabilities.  Storage profiles are created by clicking on the Home icon and navigating to Policies and Profiles within the web client.  In the VM Storage Profiles section simply click the 'Create new storage profile' button.  Give your new profile a name and continue on to the Rule-Set section.

profile

The rule sets of my "Silver" VM storage profile are shown above.  As you can see, I've specified that I want this storage profile to place VM disks within my FastClass RAID 5 CPG, and place their subsequent snapshots in the SSD-tier CPG.  When you click next you will be presented with a list of the compatible and incompatible storage.  Select your compatible storage and click next.  Once we have all of the profiles we need we can simply assign them to our VMs' disks as shown below…

vmprofile

As you can see I've selected our newly created "Silver" policy for our new VM.  This states that when the VM is created, a new VVOL will be created on our FastClass disks on the 3PAR to house it.
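As a side note, if you'd rather script the policy assignment than click through the wizard, PowerCLI's SPBM cmdlets can do the same job.  A minimal sketch, assuming the "Silver" profile and "MyNewVM" names used above – double-check the cmdlets against your PowerCLI version…

# Apply the "Silver" storage policy to the VM home object and each of its hard disks
$policy = Get-SpbmStoragePolicy -Name "Silver"
$vm = Get-VM -Name "MyNewVM"
Get-SpbmEntityConfiguration $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-SpbmEntityConfiguration (Get-HardDisk -VM $vm) | Set-SpbmEntityConfiguration -StoragePolicy $policy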

Step 8 – VVOL visibility

Although we are technically done with deploying VVOLs at this point, I wanted to highlight the 'showvvolvm' command that we can utilize on the 3PAR to gain visibility into our VVOLs.  The first use is simply listing out all of our VMs that reside on VVOLs within the 3PAR.

showvvolvm -sc

showvvol1

As you can see by the Num_vv column we have 3 VVOLs associated with our VM (MyNewVM).  But how do we get information on those VVOLs individually?  We can use the same command with the -vv flag.

showvvolvm -sc -vv

showvvol2

So now we can see that we have one VVOL dedicated to the config, one VVOL for the actual disk of the VM, and finally one VVOL hosting a snapshot that we have taken of the VM.

Anyways, that's all I have for now.  Although I haven't gone too deep into each step, I hope this post helps someone along the way get their VVOLs deployed, as I had a hard time finding all of this information in one spot.  For now I like what I see between HPE and VMware concerning VVOLs – certainly there is a long road ahead in terms of adoption – we are still dealing with a 1.0 product from VMware here and there are a lot of things that need to be worked out concerning array-based replication, VASA high availability, functionality without VASA, GUI integration, etc. – but that will come with time.  Certainly VVOLs will change the way we manage our virtualized storage and I'm excited to see what happens – for now, it's just fun to play with 🙂  Thanks for reading!

Friday Shorts – #TOVMUG, Ravello, Veeam Vanguard, vExpert and more…

Doughnuts!  I told you I don’t like ethnic food! – Mr Burns, The Simpsons

This edition of Friday Shorts isn't really going to be that short at all – it's been a while since I've done one and there is a ton of content out there that I wanted to re-share, so buckle up, here it is!

2016 Toronto VMUG UserCon!

Thursday, February 25th marked the date of the 2016 Toronto VMUG full-day UserCon and I couldn't be happier with how it turned out!  Honestly, as a leader I was curious the whole day about how many attendees we had, and as it turned out we had our best year ever with 800+ registrations and 605 brains that walked through the doors!  Last year we had 601, and I'm sure it would have been more this year but, you know, Canadian winters, eh!  We had Nick Marshall do the morning keynote and his talk was awesome – I heard nothing but good things from the attendees about the content (BTW, it was dubbed Building Your IT Career).  If you get a chance to have Nick out to your VMUG, do so!  He's awesome!  For our lunchtime keynote we had a VCDX panel – Tim Antonowitz, James Wirth, Joe Silvagi and Eiad Al-Aqqad graciously volunteered to sit up on stage as we had some great discussion with the attendees on everything from certification through to NSX integration!  All in all it was a great day – if you were there I hope you enjoyed it, and if you weren't, come next year!

What does Docker, vExpert, Ravello and Netflix have in common?

@h0bbel, that's what!  More explanation: well, Christian is a vExpert, vExperts get some free CPU hours on Ravello Systems (Ravello is awesome BTW – be nice, Oracle!), and he has a great post on his blog on how to set up Dockerflix within Ravello on Photon in order to test (get around) those geo-blocking "you can't watch this" Netflix setups!  So if you are, say, a vExpert blogger living in Canada who really fancies a couple of hours of Legally Blonde, I suggest you set the maple syrup ladle down and head over to Christian's post!

Speaking of Ravello – How’s about some community cmdlets?

And when you think of community in the sense of this blurb, just think of Luc Dekens being a one-man community!  If you are a Ravello user and fancy yourself some PowerShell cmdlets, Luc has an updated module available for download on his blog.  Luc has certainly put a lot of effort into this module, which contains hooks into almost every single API Ravello offers!  I certainly find it very useful for my work with Ravello and just want to give Luc a big thank you!

Running commands on the VCSA without SSH

If I were being shipped off to a deserted island and could only take with me the RSS feed of one virtualization blog, I have to think that as of today it would be William Lam's virtuallyGhetto!  William just seems really good at figuring things out – is that a line that can go on a resume?  I don't know; either way, his recent post on how to run commands on the VCSA remotely without enabling SSH is pretty awesome!  He utilizes the Guest Operations API through a vSphere SDK to do so.  Just go read it – it's gold, Jerry, it's gold!

#ChrisWahlFacts – He doesn't mess around when it comes to dropping knowledge about PowerShell and REST APIs

Over yonder at WahlNetwork.com Chris Wahl has a great series going dubbed Automation for Operations.  Honestly, the whole series is great, but it's the last four parts that have really helped me out…a lot!  Chris has done a great job of explaining a bunch of concepts around using PowerShell to connect to RESTful APIs, including authentication, processing GET requests, sending data with POST/PUT/PATCH, and his latest, creating hashtables for JSON payloads!

Veeam Vanguard nominations are open!

Ever wake up and think "Hey, why isn't so-and-so a Veeam Vanguard?" or "Why am I not a Veeam Vanguard?"  Well, so long as you wake up wondering about that before March 30th, you have a chance to throw your name – or someone you think is worthy – into the mix!  You can check out the official Veeam post here.

vExpert stuff!

We all know that being a vExpert isn't about what you get, but more about what you give – buuuuutttt, the fact of the matter is you do get stuff, sometimes lots of stuff, and it's hard to keep track of it all!  Thankfully for the vExpert community, Andrea Mauro is doing a great job of keeping track of it for you!  Without posts like this there is no way I'd be able to keep it all straight.  So, thanks Andrea!