Tag Archives: API

Consuming the Veeam REST API in PowerShell – Part 1 – Starting a job

Since the version 7 release of Veeam Backup & Replication, all of the typical Enterprise Manager functionality has been exposed via an XML REST API.  Being a pretty heavy user of vRealize Orchestrator, this has proven extremely useful to me when looking to automate certain parts of my infrastructure.  That said, there are times when vRO is simply out of reach – or when the person I’m creating the automation for is more familiar with PowerShell.  Now, I understand that Veeam Backup & Replication comes with PowerShell support, and what I’m about to walk through may be somewhat redundant since Veeam ships its own cmdlets built around these tasks – but this crazy IT world we live in is changing, and REST seems to be at the helm of that.  We are seeing more and more vendors first create a REST API and then consume it themselves in order to provide customers with a GUI front-end.

So, in the spirit of learning how to work with the Veeam REST API, I decided to take the time to document how to perform some of the sample functions from their API reference using nothing but PowerShell.  This first post deals solely with how to start an existing Veeam Backup & Replication job.  Keep in mind that the nature of REST is such that although the bodies and headers may change, the process of consuming it is much the same no matter what the application – so there is some valid learning to be had regardless of the end product.

PowerShell and interacting with REST.

Before jumping right into Veeam specifics we should first discuss a few things about the PowerShell pieces we will need, as well as some specifics of the Veeam Enterprise Manager REST API itself.  A REST API is consumed by sending simple HTTP requests – a GET, PUT, POST, or whatever methods the API supports – to a uri.  From there, the API looks at what was passed and returns what any HTTP request would: headers, a status code, and a body.  It’s this response that we need to parse in order to discover any details pertaining to our request – it tells us whether or not the operation was successful, and passes back any data relating to the request.  Now, in Veeam’s case, Enterprise Manager uses an XML-based API.  This means we can expect the response body in XML format – and, if we ever need to pass a body with a request, we need to form that body as XML before sending it!  All of this sounds kind of difficult – but in the end it really isn’t, and you will see that as we create our first script!  Really, there are two key PowerShell specifics we are using…

  • Invoke-WebRequest – the cmdlet we use to send the API call, passing a uri, a method, and sometimes headers
  • [xml] – a simple way to cast our response content as XML in order to more easily parse and retrieve the desired information from it
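As a rough illustration of that request/parse cycle outside PowerShell – here in Python, with a canned XML body standing in for a live response (the element names are invented for the example, not taken from the real Enterprise Manager payload):

```python
import xml.etree.ElementTree as ET

# A canned body standing in for $response.Content; in real use this would
# come back from an HTTP GET against the API endpoint.
sample_body = """<EnterpriseManager>
  <SupportedVersions>
    <SupportedVersion Name="v1_2"/>
  </SupportedVersions>
</EnterpriseManager>"""

# The equivalent of casting with [xml] in PowerShell:
root = ET.fromstring(sample_body)
versions = [v.get("Name") for v in root.iter("SupportedVersion")]
print(versions)  # ['v1_2']
```

The point is simply that once the body is parsed as XML, walking it to pull out attributes is the same job in any language.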

So with that said, let’s get scripting…

First Step – Get a SessionId

The first step in any API consumption is usually authentication – and aside from the scopes and methods themselves, this is normally where we see the most discrepancies between vendors.  With Veeam we simply send a POST request to the sessionMngr resource type and retrieve a sessionId.  It’s this sessionId that will need to be included in the header of all subsequent requests to the API – this is how we are identified and authenticated.  Now, you could send a GET request to the root of the API scope and parse through all of the returned content to find a specific version’s uri if you wanted – but I happen to know that we can simply use ?v=latest within Veeam to always use the latest and greatest version.  So let’s go ahead and authenticate against the API and retrieve our sessionId with the following code

$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
$sessionId = $response.Headers["X-RestSvcSessionId"]


Looking at the code above we are doing a couple of things – first, we issue a POST request to http://localhost:9399/api/sessionMngr/?v=latest, and have the system prompt us for the credentials that will perform the actual authentication.  Then we parse the returned headers in the response in order to grab our sessionId.  If all goes well, you should be left with a long alphanumeric string stored in our sessionId variable – and now we are authenticated and ready to start requesting…

Now let’s start that job!

So the first example in the REST API reference is starting a specific job – to do this we first need to get the uri for the jobs resource.  Now we could simply look this up in the reference guide, as it has all the information (***hint*** it’s http://localhost:9399/api/jobs) – but where’s the fun in that?  The response we just received from logging in has all of the information we need to grab the uri programmatically – and, should things ever change, we won’t have to rewrite our code if we grab it from the response.  So, to get the proper uri we can use the following one-liner to parse our content as XML and find the correct child node…

$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href
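The same Where-Object filter can be sketched in any language with an XML parser.  Here’s a hedged Python equivalent, using a hand-built fragment shaped the way the PowerShell above implies (Link elements carrying Type and Href attributes):

```python
import xml.etree.ElementTree as ET

# A fragment shaped like the login response the one-liner parses:
# LogonSession -> Links -> Link elements with Type/Href attributes.
logon_xml = """<LogonSession>
  <Links>
    <Link Type="EnterpriseManager" Href="http://localhost:9399/api/"/>
    <Link Type="JobReferenceList" Href="http://localhost:9399/api/jobs"/>
  </Links>
</LogonSession>"""

root = ET.fromstring(logon_xml)
# Equivalent of: ... | Where-Object { $_.Type -eq 'JobReferenceList' } -> .Href
uri = next(link.get("Href") for link in root.iter("Link")
           if link.get("Type") == "JobReferenceList")
print(uri)  # http://localhost:9399/api/jobs
```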

Now that we have the proper uri we can go ahead and make a GET request to it to return a list of jobs within Enterprise Manager.  But remember, we have to pass that sessionId through the request header as well – so in order to do this we issue the following command…

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, our $response.Content will contain a lot of information, including all of our job names and the metadata associated with them.  So, in order to find the proper uri for my job (Backup Scoreboard) I can use the following command to once again retrieve the uri for our next call.

$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq 'Backup Scoreboard'}).Href

Once we have that, we again send a GET request to the new uri

$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}

Again, we get a lot of information when looking at our $response.Content – but let me format it a bit below so we can see what we have…

(screenshot: the job’s XML content, showing a Link element for each available action)

As you can see we have a few different Hrefs available to grab this time – each relating to a different action that can be taken on our job.  In our case we are looking at simply starting the job, so let’s go ahead and grab that uri with the following command…

$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href

And finally, to kick the job off we send a POST request this time, using the uri we just grabbed…

$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}

Now if everything has gone as intended we should be able to pop over to our VBR Console and see our job running.  Wasn’t that way easier than right-clicking and selecting Start?  One thing I should note is that we can parse this response body as well and grab the taskId for the job we just started – from there we can query the tasks resource to check its status, result, etc.  For those that learn better by simply seeing the complete script I’ve included it below (and in fairness, running this script is faster than right-clicking and selecting ‘Start’).  In our next go at PowerShell and the Veeam API we will take a look at how we can instantiate a restore – so keep watching for that…  Thanks for reading!
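That follow-up idea – grabbing the taskId out of the start response – uses the same XML parsing as everything above.  A hedged sketch (Python, with an invented element layout, since the real task body isn’t shown here):

```python
import xml.etree.ElementTree as ET

# Hypothetical shape of the body returned when a job is started; the real
# element names may differ -- the point is that the same parse-and-pull
# approach yields an id you can poll the tasks resource with.
task_xml = "<Task><TaskId>task-1</TaskId><State>Running</State></Task>"
root = ET.fromstring(task_xml)
task_id = root.findtext("TaskId")
state = root.findtext("State")
print(task_id, state)
```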

$backupjobname = "Backup Scoreboard"
#Log in to server
$response = Invoke-WebRequest -Uri "http://localhost:9399/api/sessionMngr/?v=latest" -Method "POST" -Credential (Get-Credential)
#Get Session Id
$sessionId = $response.Headers["X-RestSvcSessionId"]
# Get Job Reference link
$uri = (([xml]$response.Content).LogonSession.Links.Link | where-object {$_.Type -eq 'JobReferenceList' }).Href
# List jobs
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
# get specific job from list
$uri = (([xml]$response.Content).EntityReferences.Ref.Links.Link | Where-object {$_.Name -eq $backupjobname }).Href
#get job actions
$response = Invoke-WebRequest -Uri $uri -Method "GET" -Headers @{"X-RestSvcSessionId" = $sessionId}
#get start action
$uri = (([xml]$response.Content).Job.Links.Link | Where-object {$_.Rel -eq 'Start'}).Href
#Start job
$response = Invoke-WebRequest -Uri $uri -Method "POST" -Headers @{"X-RestSvcSessionId" = $sessionId}
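The script’s logic is small enough to restate language-neutrally.  Below is a rough Python sketch of the same sequence; the transport callable is a stand-in for Invoke-WebRequest (so nothing here assumes a live server), and the XML shapes mirror the fragments used above:

```python
import xml.etree.ElementTree as ET

def pick_href(xml_text, attr, value):
    """Return the Href of the first Link element whose `attr` equals `value`."""
    for link in ET.fromstring(xml_text).iter("Link"):
        if link.get(attr) == value:
            return link.get("Href")
    raise LookupError("no Link with %s=%r" % (attr, value))

def start_job(transport, base_uri, job_name):
    """Mirror of the script: log in, follow the JobReferenceList link,
    find the job by name, then POST to its Start link.
    `transport(method, uri, headers)` returns (headers, body)."""
    headers, body = transport("POST", base_uri + "/sessionMngr/?v=latest", {})
    session = {"X-RestSvcSessionId": headers["X-RestSvcSessionId"]}
    _, body = transport("GET", pick_href(body, "Type", "JobReferenceList"), session)
    _, body = transport("GET", pick_href(body, "Name", job_name), session)
    return transport("POST", pick_href(body, "Rel", "Start"), session)
```

Injecting the transport this way also makes the flow testable against canned responses before pointing it at a real Enterprise Manager.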

Automation using the Nakivo API

The Software Defined Data Center – it’s everywhere.  You can’t go to any big trade show in the IT industry without hearing the phrase “Software Defined X” tossed around at all of the booths.  Over the last decade or so we have seen software take center stage in our data centers – becoming the glue that holds everything together.  With this focus on software it’s extremely important that companies develop and support APIs within their products.  For one, an API is our way of taking application x and integrating it with application y.  Secondly, it’s important for the success of the company – without an API, organizations may look elsewhere for a solution that provides one, and vendors cannot securely control access into their solutions, leaving customers to develop unsupported and faulty workarounds.

One big example I always like to use to show the benefit of API integrations is the deployment of a VM.  Sure, we use our hypervisor of choice to clone VMs from templates, providing some automation and orchestration around the configuration of said VM – but the job doesn’t end there.  We have monitoring solutions we may need to add our VM into, IP management tools to integrate with to retrieve IPs and DNS information, and most importantly, we have to ensure that our newly created VM is adequately protected in terms of backup and recovery.  With so many hands inside the data center creating VMs, our backup administrators might not always know a certain VM has been created – and when a failure occurs, there’s a pretty good chance we won’t be able to recover without any backups – so it’s this situation we will look at today…

Automatically protecting our VMs

Our software of choice today will be Nakivo Backup and Replication – a company based out of Silicon Valley providing data protection solutions.  Nakivo provides full API integration into their backup suite, allowing administrators and developers to create automation around the creation, modification, and removal of jobs.  The scope of our integration will be as follows – we will create a simple vRealize Orchestrator workflow that allows us to right-click a VM from within the vSphere Web Client and add that VM into an already existing backup job.  From here I’ll let your imagination run wild – maybe you integrate this code into your VM deployment workflow to automatically protect each VM on creation – the point is that we have a starting point to look at the possibilities of consuming Nakivo’s API and creating some automation within your environment for backup and recovery.


A little about the Nakivo API

Before we get into the actual creation of the vRO workflow it’s best we understand a little bit about the Nakivo API itself.  Nakivo provides a JSON-based API – all of our requests and responses will be formatted as JSON.  Requests all go through as POSTs, and are always sent to the /c/router realm (ie https://ip_of_nakivo:4443/c/router).  As far as authentication goes, Nakivo utilizes cookie-based authentication – our first request is sent to the login method, upon which we receive a JSESSIONID that we have to pass with every subsequent request in order to secure our connection.  As we can see from the example request below, requests need to be formatted in such a way that we first specify an instance (ie AuthenticationManagement, BackupManagement, InventoryManagement, etc.) and a method (ie login, saveJob, getJob, etc.).  From there we attach the data associated with the method and instance, as well as a transaction id (tid).  The transaction id can be an auto-incrementing integer if you like, or can simply be set to any integer – its main purpose is to group multiple method calls into a single POST, which we won’t be doing anyway, so you will see I always use 1.

var requestJSON = "{'action': 'AuthenticationManagement','method':'login','data': [admin,VMware1!,true],'type': 'rpc','tid': 1}";

Above we show an example of a login request in JavaScript, because this is the language of choice for vRealize Orchestrator which we will be using – but do remember that you could use PHP/JAVA/PowerShell – whatever language you want so long as you can form an HTTP request and send JSON along with it.
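The request strings above use single quotes, which strict JSON parsers reject even though they work against the router in the author’s examples; a safer habit in any language is to build the envelope as a native structure and serialize it, which yields strict JSON with the same action/method/data/type/tid fields.  A sketch in Python (illustrative values only):

```python
import json

def nakivo_request(action, method, data, tid=1):
    """Build the envelope described above: action/method/data/type/tid."""
    return json.dumps({"action": action, "method": method, "data": data,
                       "type": "rpc", "tid": tid})

# The login call from the example, serialized as strict JSON:
body = nakivo_request("AuthenticationManagement", "login",
                      ["admin", "VMware1!", True])
print(body)
```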

On with the workflow

Before diving right into the code it’s best to take a look at the different sections or components that we will need to run through in order to add a given VM to a Nakivo job through vRealize Orchestrator.  With that said we can break the process down into the following sections…

  • Add Nakivo as an HTTPRest object within vRO
  • Create workflow w/ VM as an input object and the Nakivo HTTPREST as an argument
  • Create some variables in regards to our VM (IE Name, Cluster, etc)
  • Login to Nakivo to retrieve session
  • Retrieve our target job
  • Find VMs Cluster ID within Nakivo (ClusterID is required in order to find the actual VM within Nakivo).
  • Gather VM information from within Nakivo
  • Gather information about our repository from within Nakivo
  • Build JSON request and add VM to job

With our workflow broken down into manageable chunks let’s go ahead and start coding

Add Nakivo as an HTTPRest object.

If you have ever worked with the HTTP-REST plugin within vRO then this will seem like review – however, for those that haven’t, let’s take a look at the process of getting this set up.  From within the workflow view, simply run the ‘Add a REST host’ workflow located under the HTTP-REST/Configuration folders.  As far as parameters go, simply give the host a name, use https://ip_of_nakivo:4443 as the URL, and be sure to select ‘Yes’ for certificate acceptance as shown below

(screenshot: the ‘Add a REST host’ workflow inputs)

The remaining steps don’t matter much as it pertains to adding Nakivo as a REST host within vRO – for authentication I selected Basic and provided the credentials for Nakivo, but this really doesn’t matter as we are going to use cookie/header-based authentication in our code anyway; something simply needs to be selected and inputted within vRO.  After clicking Submit, the NakivoAPI REST host should be added to our vRO inventory.

Workflow creation

As far as the workflow goes I’ve tried to keep it as simple as possible, requiring only 1 input attribute and 1 input parameter as follows

  • Input Attribute (Name: NakivoAPI – Type: RESTHost – Value: set to the Nakivo object created earlier)


  • Input Parameter (Name: sourceVM – Type: VC:VirtualMachine )


Code time!

After this, simply drag and drop a scriptable task into the schema and we can get started with the code!  I’ve always found it easier to display all the code first and then go through the main sections line by line afterwards.  You can find the JavaScript we need below…

var vmName = sourceVM.name
var cluster = System.getModule("com.vmware.library.vc.cluster").getComputeResourceOfVm(sourceVM);
var clusterName = cluster.name;
 
// login and retreive sessionID
var requestJSON = "{'action': 'AuthenticationManagement','method':'login','data': [admin,VMware1!,true],'type': 'rpc','tid': 1}";
var request = NakivoAPI.createRequest("POST", "/c/router", requestJSON);
request.setHeader("Content-Type","application/json");
var response = request.execute();
var headers = response.getAllHeaders();
var cookie = headers.get("Set-Cookie");
 
// retrieve target job
requestJSON = "{'action': 'JobManagement','method':'getJob','data': [1],'type': 'rpc','tid': 1}";
request = NakivoAPI.createRequest("POST","/c/router",requestJSON);
request.setHeader("Content-Type","application/json");
request.setHeader("Cookie", cookie);
response = request.execute();
var jsonResponse = JSON.parse(response.contentAsString);
var job = jsonResponse.data;
 
// find clusterID
requestJSON = "{'action': 'InventoryManagement','method':'collect','data': [{'viewType':'VIRTUAL_ENVIRONMENT'}],'type': 'rpc','tid': 1}";
request = NakivoAPI.createRequest("POST","/c/router",requestJSON);
request.setHeader("Content-Type","application/json");
request.setHeader("Cookie", cookie);
response = request.execute();
jsonResponse = JSON.parse(response.contentAsString);
 
// reduce to datacenters
var vcenter = jsonResponse.data.children[0];
var datacenters = vcenter.children;
var datacenter;
var cluster;
for ( var p in datacenters)
{
	for (var c in datacenters[p].children)
	{
		if (datacenters[p].children[c].name == clusterName)
		{
			cluster = datacenters[p].children[c];
		}
	}
}
var clusterid = cluster.identifier;
 
// look in cluster for VM info...
requestJSON = "{'action': 'InventoryManagement','method':'list','data': [{'nodeType':'VMWARE_CLUSTER','nodeId': '" + clusterid + "','includeTypes': ['VM'] }],'type': 'rpc','tid': 1}";
request = NakivoAPI.createRequest("POST","/c/router",requestJSON);
request.setHeader("Content-Type","application/json");
request.setHeader("Cookie", cookie);
response = request.execute();
jsonResponse = JSON.parse(response.contentAsString);
var vms = JSON.parse(response.contentAsString);
vms = vms.data.children;
var vm;
for (var p in vms)
{
	if (vms[p].name == vmName)
	{
		vm = vms[p];
	}
}
 
// get more info on VM
requestJSON = "{'action': 'InventoryManagement','method':'getNodes','data': [true, ['"+ vm.vid + "']],'type': 'rpc','tid': 1}";
request = NakivoAPI.createRequest("POST","/c/router",requestJSON);
request.setHeader("Content-Type","application/json");
request.setHeader("Cookie", cookie);
response = request.execute();
var vminfo = JSON.parse(response.contentAsString);
vminfo = vminfo.data.children[0];
var vmdisk = vminfo.extendedInfo.disks[0].vid;
 
// get target storage
requestJSON = "{'action': 'InventoryManagement','method':'list','data': [{'includeTypes': ['BACKUP_REPOSITORY'] }],'type': 'rpc','tid': 1}";
request = NakivoAPI.createRequest("POST","/c/router",requestJSON);
request.setHeader("Content-Type","application/json");
request.setHeader("Cookie", cookie);
response = request.execute();
jsonResponse = JSON.parse(response.contentAsString);
var targetVid = jsonResponse.data.children[0].vid;
 
//build data portion of JSON to add VM to job
var jsonSTR = '{ "sourceVid": "' + vminfo.vid + '","targetStorageVid": "' + targetVid + '","mappings": [{"type": "NORMAL","sourceVid": "' + vmdisk + '"}], "appAwareEnabled": false}';
var json = JSON.parse(jsonSTR);
 
//push new object to original job
job.objects.push(json);
System.log(JSON.stringify(job));
 
// let's try and push this back in now....
requestJSON = "{'action': 'JobManagement','method': 'saveJob', 'data': [" + JSON.stringify(job) + "],'type': 'rpc','tid': 1}";
request = NakivoAPI.createRequest("POST","/c/router",requestJSON);
request.setHeader("Content-Type","application/json");
request.setHeader("Cookie", cookie);
response = request.execute();
 
// done!!!!

Lines 1 – 3 – Here we simply set up a few variables we will need later in the script: vmName, which is assigned the name attribute of our input parameter sourceVM, and clusterName, which we get by running a built-in action that returns the cluster the VM belongs to.  Both of these will be needed when we gather the information required to add the VM to the backup job.

Lines 5 – 11 – This is our request to log in to Nakivo.  As you can see, we create our request and send the login method, along with the associated login data, to the AuthenticationManagement interface.  This request authenticates us and sends back the JSESSIONID we need in order to make subsequent requests, which we store in a cookie variable on line 11.

Lines 13 – 20 – Again we make a request to Nakivo, this time to get the job we want to add the VM to.  I only have one job within my environment, so I’ve simply utilized the getJob method and sent a data value of 1 (the job id) since I know that is my one and only job id in the system.  If you don’t know the id, Nakivo provides methods within their API to search for a job id by the job name.  Also note that since this is a subsequent request after a login, we send our cookie authentication data on line 17 – and we store our response data in a variable named job on line 20, which we will need later when we update the job.

Lines 22 – 45 – This is a request to the InventoryManagement interface that we use to find out what the id of the cluster housing the virtual machine is within Nakivo.  First, on line 23 we build a request to return our complete virtual infrastructure inventory, which we parse through on lines 35-44 looking for a match on our cluster name.  I’ve had to loop through data centers as my test environment contains more than one virtual data center.  Finally, on line 45 we assign the clusterid variable the Nakivo identifier of the cluster.

Lines 47 – 73 – Here we use our cluster identifier to list out the VM inventory within it.  When we find a match on our VM name while looping through, we assign it to a vm variable.  Then, on line 66, we send a request to the InventoryManagement interface again, this time at the virtual machine level, sending the identifier of our newly discovered VM.  Once we have the response, we assign the identifier of the VM’s disk to a variable on line 73.  Again, I know this environment and I know the VM contains only one disk, so I’ve hard-coded my index – if it was unknown, or truly automated, you would most likely have to loop through the disks here to get your desired output.
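As noted, with an unknown disk count you’d loop rather than hard-code disks[0].  A quick sketch of what that mapping loop could look like (Python, with a hypothetical vminfo shape mirroring the response fields used above):

```python
# Hypothetical vminfo shape, modeled on the fields the workflow reads
# (vid, extendedInfo.disks[].vid); the real response carries more data.
vminfo = {"vid": "vm-101",
          "extendedInfo": {"disks": [{"vid": "disk-1"}, {"vid": "disk-2"}]}}

# Instead of hard-coding disks[0], build a mapping entry for every disk:
mappings = [{"type": "NORMAL", "sourceVid": d["vid"]}
            for d in vminfo["extendedInfo"]["disks"]]
print(mappings)
```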

Lines 75 – 82 – This block of code is used to get the identifier of the target storage, or repository, within Nakivo.  Again, we need this information for our final request that adds the VM to the job – and again, this is a known environment, so I could simply hard-code my array index on line 82 to return the proper repository (as there is only one).

Lines 84 – 90 – Here we are simply building out the JSON variable that we need in order to push all of the information we have previously gathered above.  We basically form our string on line 85, convert it to JSON directly after, and push it into the original job variable we set on line 20.

Lines 92 – 99 – Ah, finally!  This block takes all of our hard work and pushes the job back into the saveJob method of the Nakivo JobManagement interface.  Once executed, you should see your job info within Nakivo update, reflecting the new VM added to the job.
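Putting those last two blocks together, the mutate-and-save step amounts to: append a new object to the job returned by getJob, then wrap the whole job in a saveJob envelope.  A rough Python sketch with placeholder identifiers:

```python
import json

# Assumed shapes, mirroring the variables gathered in the workflow above;
# vids are placeholders, and a real job carries many more fields.
job = {"id": 1, "objects": []}                    # as returned by getJob
new_object = {"sourceVid": "vm-101",
              "targetStorageVid": "repo-1",
              "mappings": [{"type": "NORMAL", "sourceVid": "disk-1"}],
              "appAwareEnabled": False}

job["objects"].append(new_object)                 # job.objects.push(json) in the JS
payload = json.dumps({"action": "JobManagement", "method": "saveJob",
                      "data": [job], "type": "rpc", "tid": 1})
```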

So there you have it! A completely automated way of selecting a VM within vRealize Orchestrator and adding it to a Nakivo backup job – all without having to open up the Nakivo UI at all!

But wait, there’s more!

Ahh – we eliminated the need to open the Nakivo UI, but how about eliminating the Orchestrator client as well and simply executing this job directly within the vSphere Web Client – sounds like a good idea to me!  If you have properly integrated vRO and vSphere – and I say “properly” because it can sometimes be difficult – then doing this is a pretty easy task.

Within the ‘Context Actions’ tab on our vRO configuration within the web client simply click ‘+’ to add a new action.  As shown below we can simply browse our workflow library and select our newly created Nakivo workflow and associate that with the right-click context menu of a virtual machine.

(screenshot: associating the workflow with the virtual machine context menu)

What we have essentially done now is allowed our administrators to simply right-click on a VM, browse to ‘All vRealize Orchestrator Actions’ and click on our workflow name.  From there the vRO workflow will take the associated VM (the one we right-clicked on) and assign it to our sourceVM parameter – meaning we’ve taken the complete process of logging into Nakivo, editing our backup job, adding a new VM, and saving it and converted it to a simple right click, followed up by a left click – without having to leave the vSphere Web Client!


All in all, this is a pretty basic example of some of the things we can do with the Nakivo API – and it followed a pretty simple and stripped-down workflow – but the point is Nakivo offers a wide variety of methods and integration points into their product.  Pretty much anything you can do within the GUI can be performed by making calls to the API.  This is what helps a product integrate into the Software Defined Data Center – and what allows administrators to save time and provide consistency, all the while ensuring our data is protected.  Nakivo also has a wide variety of documentation and a Java SDK built around their API, complete with documents and explanations of all of the interfaces provided.  If you are interested in learning more about Nakivo’s API, or Nakivo’s products in general, head on over to their site here – you can get started for the low cost of free!  Until next time, happy automating!

Friday Shorts – #TOVMUG, Ravello, Veeam Vanguard, vExpert and more…

Doughnuts!  I told you I don’t like ethnic food! – Mr Burns, The Simpsons

This edition of Friday Shorts isn’t really going to be that short at all – It’s been a while since I’ve done one and there is a ton of content out there that I wanted to re-share for people so buckle up, here it is!

2016 Toronto VMUG UserCon !

Thursday, February 25th marked the date for the 2016 Toronto VMUG Full Day UserCon, and I couldn’t be happier with how it turned out!  Honestly, as a leader I wondered all day how many attendees we had, and as it turned out we had our best year ever with 800+ registrations and 605 brains that walked through the doors!  Last year we had 601, and I’m sure it would have been more this year but, you know, Canadian winters eh!  We had Nick Marshall do the morning keynote and his talk was awesome – I heard nothing but good things from the attendees about the content (BTW it was dubbed Building your IT Career).  If you get a chance to have Nick out for your VMUG, do so!  He’s awesome!  For our lunchtime keynote we had a VCDX panel – Tim Antonowitz, James Wirth, Joe Silvagi and Eiad Al-Aqqad graciously volunteered to sit up on stage as we had some great discussion with the attendees, covering everything from certification through to NSX integration!  All in all it was a great day – if you were there I hope you enjoyed it, and if you weren’t, come next year!

What does Docker, vExpert, Ravello and Netflix have in common?

@h0bbel, that’s what!  More explanation – well, Christian is a vExpert, vExperts get some free CPU hours on Ravello Systems (Ravello is awesome BTW – be nice, Oracle!), and he has a great post on his blog on how to set up Dockerflix within Ravello on Photon in order to test (get around) those geo-blocking Netflix “you can’t watch this” setups!  So if you are, say, a vExpert blogger living in Canada who really fancies a couple of hours of Legally Blonde, I suggest you set the maple syrup ladle down and head over to Christian’s post!

Speaking of Ravello – How’s about some community cmdlets?

And when you think of community in the sense of this blurb, just think of Luc Dekens being a one-man community!  If you are a Ravello user and fancy yourself some PowerShell cmdlets, Luc has an updated module on his blog available for download.  Luc has certainly put a lot of effort into this module, which contains hooks into almost every single API Ravello offers!  I certainly find this module very useful for my work with Ravello and just want to give Luc a big thank you for it!

Running commands on the VCSA without SSH

If I was being shipped off to a deserted island and could only take the RSS feed of one virtualization blog with me, I have to think that as of today that blog would be William Lam’s virtuallyGhetto!  William just seems really good at figuring things out – is that a line that can go on a resume?  I don’t know; either way, his recent post on how to run commands on the VCSA remotely without enabling SSH is pretty awesome!  He utilizes the Guest Operations API through a vSphere SDK to do so!  Just go read it – it’s gold, Jerry, it’s gold!

 #ChrisWahlFacts – He doesn’t mess around when it comes to dropping knowledge about PowerShell and REST APIs

Over yonder at WahlNetwork.com, Chris Wahl has a great series going on dubbed Automation for Operations – honestly, the whole series is great, but it’s the last four parts that have really helped me out…a lot!  Chris has done a great job of explaining a bunch of concepts around utilizing PowerShell to connect to RESTful APIs, including authentication, processing GET requests, sending data with POST/PUT/PATCH, and his latest, creating hashtables for JSON payloads!

Veeam Vanguard nominations are open!

Ever wake up and think “Hey, why isn’t so-and-so a Veeam Vanguard?” or “Why am I not a Veeam Vanguard?”  Well, so long as you wake up wondering about that before March 30th, you have a chance to throw your name – or the name of someone you think is worthy – into the mix!  You can check out the official Veeam post here.

vExpert stuff!

We all know that being a vExpert isn’t about what you get, but more about what you give – buuuuutttt, the fact of the matter is you do get stuff, sometimes lots of stuff, and it’s hard to keep track of it all!  Thankfully for the vExpert community, Andrea Mauro is doing a great job of keeping track of it all for you – without posts like his there is no way I’d manage it.  So, thanks Andrea!

Ravello on my wrist – Pebble in the cloud

You can probably get the gist of what this post might be about from the title, but it does leave a little to the imagination.  For those who hate waiting for the point, go ahead and watch this small video…

Before I get right into what I’ve done let me first provide a little background information as to why I’ve done this aside from just looking for “something geeky to do”.

First up, I’ve pretty much let everyone know how much I heart Ravello Systems.  Not to go too deep, but as I build up labs and environments for this blog and for other interests, I really like to break things.  Why?  That’s how I learn best: breaking things, fixing them, then writing it all down.  The problem is I seem to always be rebuilding or fixing before I can move on to my next project.  Ravello solves that issue for me – with Ravello I’m able to keep multiple blueprints of completely configured vSphere labs (different versions, different hardware configs) in the cloud.  When I’m feeling geeky I can simply deploy one of these as an application to either Google or Amazon and away I go.  If I break it to the point of no return it’s no biggie – I can simply redeploy!  Case in point: it’s a time-saver for me!

Secondly, I love to write code – it’s an odd passion of mine and something I actually went to school for but never 100% pursued.  Meaning I love to write code… casually!  I couldn’t imagine dedicating my whole career to it, but having the knowledge of how to do it casually sure has helped me with almost every position I’ve held.

Thirdly, a little while ago I purchased a Pebble watch.  I’m still not sure why I wanted a smartwatch, but I knew if I had one I’d want it to be somewhat “open”, and Pebble met those needs.  Using a service called CloudPebble and by turning on development mode in the iPhone app I’m able to deploy custom applications to my Pebble – so that was a big seller when I was looking at watches – oh, and the fact that it’s only around $100 helps as well…

So on to the problem – I mentioned I love Ravello and have multiple applications set up within the service.  The applications are great, however it takes a good amount of time after powering one on before you are able to start using it.  Those vSphere services need time to initialize and boot.  My usual routine involves logging into Ravello and powering on whatever I might need for the night before I leave work.  That way the initialization can happen during my commute, supper with my family, and bedtime routines, and the lab is ready to go when I am.  There are times, though, when I get halfway home and realize I forgot to power on my labs, or I’m not near a computer and can’t be bothered to use the small iPhone screen.

There’s an app for that!

For these reasons I decided to try and figure out the Ravello APIs and the Pebble SDK and see if it was possible to create a small application to simply log into Ravello, select an existing application, and power it on!  It sounds simple enough but took a lot of trial and error – I had no clue what I was doing, but in the end I was left with the solution below – and it works, so I guess you could call it a success.

Prerequisites

There are a few pieces that need to fall into place before any of this will work.  First up, you will need a CloudPebble account.  CloudPebble is a development environment that allows us to write applications for the Pebble watch in either JavaScript or C.  You can use an existing Pebble account to log into CloudPebble or simply set up a new account – either way you need one, and it’s free!

Secondly, you will need to enable developer connections within the Pebble app on your phone.  This is easily done by selecting ‘Developer’ within the main menu and sliding the switcher over.  Honestly, it’s a phone app – I’m sure you can figure it out.

Thirdly, let’s go ahead and set up a project within CloudPebble.  You can do this by simply importing mine, or manually by giving your new project a name and selecting PebbleJS as your Project Type.  Once created you should be at a screen similar to the one shown below…

[Screenshot: the CloudPebble project screen]

As you can see we have one source file (app.js).  This is the only source file we will need for this project.  If you imported my project you are done for now, but if you created a new project manually this file will be full of example code showing how to perform various functions and respond to different events within the Pebble interface – we won’t need any of this, so go ahead and delete all the code within the file (but not the file itself).  We will replace it with the code explained in the next section.

The code

If you simply want all the code to go through on your own, go ahead and get that here.  For the rest of us, I’ll try and explain the different blocks of code below…

 1  // import required libraries
 2  var UI = require('ui');
 3  var ajax = require('ajax');
 4  var Vector2 = require('vector2');
 5  var Vibe = require('ui/vibe');

Lines 1 through 5 simply deal with importing the libraries we will be working with – UI gives us access to the Pebble UI, ajax is what we will use for the Ravello API calls, Vector2 handles positioning items on the watch, and Vibe lets us access the vibration features of the watch.

 7  // setup authentication information
 8  var encodedLogin = "mybiglongencodedstring";
 9  var expirationTimeInSeconds = 600; // timeout for app

Lines 8 and 9 set up a couple of variables for the application.  First up, encodedLogin represents a base64 encoded string of the username and password you use to log into Ravello, with a “:” between them.  You can grab this by heading to https://www.base64encode.org/ and grabbing the encoded string using UTF-8 as the output – just don’t forget to place the “:” between them (ie. I encoded “mwpreston@myemail.com:supersecretpassword”).  Copy the result and assign it to the encodedLogin variable on line 8.
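If you’d rather not paste your credentials into a third-party website, the same encoded string can be generated locally.  Here’s a quick Node.js sketch – the credentials shown are placeholders, not real ones, and this runs on your workstation, not on the watch; the Pebble app itself only ever sees the finished encodedLogin value:

```javascript
// Build the base64-encoded "username:password" string used for
// Basic authentication against the Ravello API.
function buildEncodedLogin(username, password) {
  return Buffer.from(username + ':' + password, 'utf-8').toString('base64');
}

// Placeholder credentials - substitute your own Ravello login.
var encodedLogin = buildEncodedLogin('mwpreston@myemail.com', 'supersecretpassword');
console.log(encodedLogin);
```

Copy the printed value into the encodedLogin variable on line 8 of the project.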

Line 9 deals with our expiration time – when we power on an application within Ravello we need to specify an auto power-off parameter which states how long the application should run before it powers itself down.  You don’t want to use up all those valuable CPU hours, right?  The variable defined on line 9 maps to that parameter, but in seconds – so get your calculator out and come up with a number.

11  // main window
12  var splashWindow = new UI.Window();
13  var text = new UI.Text({
14    position: new Vector2(0,0),
15    size: new Vector2(144,168),
16    text: 'Logging into Ravello Systems, please wait...',
17    color: 'black',
18    textOverflow: 'wrap',
19    textAlign: 'center',
20    backgroundColor: 'white'
21  });
22
23  // Add to splashWindow and show
24  splashWindow.add(text);
25  splashWindow.show();

Lines 11 through 25 simply define the first splash window we will see in the application – a message to show the user while we make the API calls and gather the application list.  You can start to see some of the Pebble object functions and parameters here…

As we move into the ajax calls starting on line 28 we can start to see the URLs and API calls to Ravello and how they are formatted when using PebbleJS.  From here, each API call sent to Ravello is nested within the previous one – this was the only way I could get it to work.  You can go ahead and read the docs on the ajax function here – I still don’t completely understand the values being returned, but hey, it works!

Anyways, back to the task at hand – as shown below, lines 28-30 make our login request, passing basic authorization and our encodedLogin variable within the header.  After parsing the response on line 34 we display yet another splash screen (lines 35-44) with a success message.

27  // login to Ravello
28  ajax({ url: 'https://cloud.ravellosystems.com/api/v1/login', method: 'post',
29    headers: { Authorization: "Basic " + encodedLogin,
30      Accept: "application/json" }
31  },
32  function(data, status, obj) {
33    // success into Ravello
34    var contents = JSON.parse(data);
35    var text2 = new UI.Text({
36      position: new Vector2(0,0),
37      size: new Vector2(144,168),
38      text: 'Hello ' + contents.name + ', you are now logged in! - Fetching applications, please wait...',
39      color: 'black',
40      textOverflow: 'wrap',
41      textAlign: 'center',
42      backgroundColor: 'white'
43    });
44    splashWindow.add(text2);
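One gotcha in this block: JavaScript is case-sensitive, so the Authorization header on line 29 must reference the encodedLogin variable with exactly that capitalization.  Pulled out on its own as a testable sketch (this helper is not part of the project file – just an illustration), the header construction looks like this:

```javascript
// Assemble the headers for the Ravello login call.
// encodedLogin is the base64 "username:password" string from line 8.
function loginHeaders(encodedLogin) {
  return {
    Authorization: 'Basic ' + encodedLogin,
    Accept: 'application/json'
  };
}

console.log(loginHeaders('mybiglongencodedstring'));
```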

Another API call, this one gathering our application list, takes place on lines 46 and 47.  From there, lines 51 through 74 build a menu to display the application listing, hide the previous screen, and display our newly formed menu.

46  ajax({ url: 'https://cloud.ravellosystems.com/api/v1/applications', method: 'get',
47    headers: { Accept: "application/json" }
48  },
49  function(data, status, obj)
50  {
51    // success application list
52    var apps = JSON.parse(data);
53    var count = Object.keys(apps).length;
54    var menuItems = [];
55    var appname;
56    var appid;
57    for (var i = 0; i < count; i++) {
58      appname = apps[i].name;
59      appid = apps[i].id;
60      menuItems.push({
61        title: appname,
62        subtitle: appid
63      });
64    }
65    // Construct Application menu to show to user
66    var resultsMenu = new UI.Menu({
67      sections: [{
68        title: 'My Applications',
69        items: menuItems
70      }]
71    });
72    // Show the Menu, hide the splash
73    resultsMenu.show();
74    splashWindow.hide();
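The loop on lines 57 through 64 is just a data transformation, so it can be sanity-checked outside the watch entirely.  Here’s the same logic as a standalone function, fed with some made-up application data – the names and ids below are examples only, not real Ravello output:

```javascript
// Convert a parsed Ravello application list into the
// {title, subtitle} objects that the Pebble UI.Menu expects.
function buildMenuItems(apps) {
  var menuItems = [];
  for (var i = 0; i < apps.length; i++) {
    menuItems.push({
      title: apps[i].name,   // application name becomes the menu title
      subtitle: apps[i].id   // application id becomes the subtitle
    });
  }
  return menuItems;
}

// Hypothetical application list for illustration only.
var sample = [
  { name: 'vSphere Lab', id: 12345678 },
  { name: 'View Lab', id: 87654321 }
];
console.log(buildMenuItems(sample));
```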

At this point we are waiting on user interaction – the user needs to select the application they want powered on.  Line 76 defines that exact event listener, triggered once the user hits the select button.

75   // Add an action for SELECT
76   resultsMenu.on('select', function(e) {
77     console.log('Item number ' + e.itemIndex + ' was pressed!');
78     // this is where magic happens and we translate which item was pressed into turning on applications
79     var detailCard = new UI.Card
80     (
81       { title: "Starting Lab", subtitle: e.item.title }
82     );
83     detailCard.show();
84     detailCard.body('Setting lab power off time to ' + expirationTimeInSeconds.toString() + ' seconds...');
85     var ExpURL = 'https://cloud.ravellosystems.com/api/v1/applications/' + e.item.subtitle + '/setExpiration';
86     console.log(ExpURL);
87     // set expiration time for selected application
88     var expbody = { "expirationFromNowSeconds": expirationTimeInSeconds };
89     ajax
90     (
91       {
92         url: ExpURL, type: "json", method: "post", headers: { Accept: "application/json" }, data: expbody
93       },
94       function(data, status, obj)
95       {
96         // success setting expiration time
97         detailCard.body('Setting lab power off time to ' + expirationTimeInSeconds.toString() + ' seconds...' + 'DONE!\nPowering on lab...');
98         var StartURL = 'https://cloud.ravellosystems.com/api/v1/applications/' + e.item.subtitle + '/start';
99         ajax
100        (
101          {
102            url: StartURL, type: "json", method: "post", headers: { Accept: "application/json" }
103          },

Once an application is selected, lines 78 through 84 display some status messages as to what is happening, and beginning on line 89 we start the API calls to power on the application.  First, line 92 sets the expiration time for the selected application; then line 102 sends the actual power-up command to Ravello.

104  function(data, status, obj)
105  {
106    // success starting application
107    console.log("Success on start:" + status);
108    detailCard.body('Setting lab power off time to ' + expirationTimeInSeconds.toString() + ' seconds...' + 'DONE!\nPowering on lab...' + 'DONE!\nLab Powered On');
109    Vibe.vibrate('short');
110  },

Lines 108 and 109 simply display a success message to the user and send a short vibrate command to the Pebble watch.
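Since both power-on URLs and the expiration body are built from the application id selected in the menu, it’s easy to pull them out as plain functions and verify them without touching the watch at all.  A quick sketch – the id below is made up, and these helpers are an illustration rather than part of the project file:

```javascript
var BASE = 'https://cloud.ravellosystems.com/api/v1/applications/';

// URL that sets the auto power-off timer for an application (line 85).
function expirationUrl(appId) {
  return BASE + appId + '/setExpiration';
}

// URL that sends the actual power-on command (line 98).
function startUrl(appId) {
  return BASE + appId + '/start';
}

// Body for the setExpiration call (line 88).
function expirationBody(seconds) {
  return { expirationFromNowSeconds: seconds };
}

console.log(expirationUrl(12345678));
console.log(JSON.stringify(expirationBody(600)));
```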

I’ve done my best to explain the code – it’s probably not the cleanest or best way to do all of this, but guess what?  I can power on a Ravello application from my watch, so that’s all that matters…  Please feel free to steal all the code if you want it – or here is the complete CloudPebble project if you are in a hurry and just want to skip the copy/paste.  I’d love any feedback anyone may have on this.  For now this is where the project sits, but I’d love to expand it further and integrate with more of the available Ravello APIs.  At the moment I’m happy with powering on my Ravello labs from my wrist!