Friday Shorts – JSON, Veeam, Embedded Host Client and more…

I think you’re really beautiful and I feel really warm when I’m around you and my tongue swells up. – Buddy the Elf

JSON will be the death of me

I don’t do a lot of JSON parsing but when I do I tend to get just a little frustrated.  Honestly I almost always try to retrieve API responses as XML if the option is there, but sometimes it isn’t and sorting through JSON is the only alternative!  I’m a newbie to this stuff and am still learning quite a bit about interacting with APIs using various methods – one of the many reasons I follow Scott Lowe’s blog regularly.  In a post this week he introduced us to a lightweight command-line tool called jq, which looks like it could most definitely help me with my JSON woes – next time I find myself staring down at a page full of squigglies (not sure of the official term for {}) I’ll give it a go… Thanks Scott!

Embedded Host Client is awesome x 5

I haven’t had a lot of time to spend with the Embedded Host Client fling – a replacement that will allow us to do away with the giant C# installable we have all grown to love for individual host management.  I did get to use the EHC early on, and it was not the greatest experience – but as time went on and it was released via VMware Flings it got a lot better; in fact, people are loving it!  Here’s a great article outlining 5 reasons why the Embedded Host Client is awesome!

VVOLs 4 DBs!

VVOL adoption is still a bit of an unknown – I haven’t seen a lot of blogs or information out there about widespread adoption!  I do know that there is general excitement out there and definitely benefits to be had by utilizing them, however they are still a little “green” in IT years!  That said there’s an interesting series on the VMware blogs going on right now outlining the benefits they can provide to databases – you can check out Part 1 and Part 2 if you like…

The Expert Guide to VMware Disaster Recovery and Data Protection

Shameless plug for myself on this one – a few months ago I began work on a guide for Veeam centered on how to provide Data Protection within a VMware environment – and the result is finally live!  Note – you are going to hit a reg-wall when trying to view this – it’s the nature of the world we live in, but if you do end up checking it out I’d love to know what you think – good or bad, but mostly good!

Dumping SharePoint Integrated SSRS reports to pdf using c#

Anytime I’m working on any sort of coding project Google is really my savior – there is absolutely no way I can know all of the different libraries and functions available inside all of the .NET components, nor do I really care to.  Sometimes I’ll get lucky and find exactly what I’m looking for, but in reality it’s a flurry of Google searches opened up in multiple tabs that end up getting pieced together into what I call “Production Code”.  Anytime I have to piece together solutions from various sites I try and post the finished product here – mainly so I don’t have to go and find it all again the next time I do it, but also because I’m sure it helps the odd stranger during their Google-Fu – so with that said, and with #vDM30in30 looming over me, here’s a solution to help with connecting to a SharePoint integrated Reporting Services instance, passing parameters to a report, and exporting the report to PDF through code-behind.

I chose to do this as a C# Windows Forms app but you could technically reproduce the code with some slight changes if you were to use ASP.NET.  Either way we only need two controls on the form to make it work: a ReportViewer and a Button – go ahead and drag those onto the form.


So for the setup of the ReportViewer control there are only a few properties we need to set in order to get our connection to the SharePoint SSRS instance, which are listed below (or, if you prefer, you can set them in code – see the sketch after the list)

  • ProcessingMode – Set this to Remote.  This simply states that the report we will be running is located remotely
  • Then under ServerReport we need to specify a couple of options
    • ReportServerUrl – This is the URL to the SSRS Report Server.  Usually, in SharePoint integrated mode it will be the same as your site URL, with ‘_vti_bin/reportserver/’ tagged on the end.  IE, https://mysharepoint.url/sitename/_vti_bin/reportserver/
    • ReportPath – This is the path to the report you would like to process, relative to your ReportServerUrl.  So if you browse to your Report Server and keep track of the directories you click on to get to the desired rdl – this is what goes into this property – IE Reports/MyReports/ReportName.rdl
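
If you’d rather skip the designer’s property grid, those same properties can also be set in code – here’s a minimal sketch, assuming the control is named reportViewer1 and reusing the made-up URL/path examples from the list above:

// point the ReportViewer at the SharePoint-integrated SSRS instance
reportViewer1.ProcessingMode = Microsoft.Reporting.WinForms.ProcessingMode.Remote;
reportViewer1.ServerReport.ReportServerUrl = new Uri("https://mysharepoint.url/sitename/_vti_bin/reportserver/");
reportViewer1.ServerReport.ReportPath = "Reports/MyReports/ReportName.rdl";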

And some code…

First up we are going to be accessing a few methods and objects outside of the normal C# includes.  In order to gain access to these you will need to import the following references into your project and declare them at the top of your .cs file – use NuGet, it’s much easier :)

using Microsoft.Reporting.WinForms.Internal.Soap.ReportingServices2005.Execution;
using Microsoft.Reporting.WinForms;
using System.Web.Services.Protocols;
using System.IO;

And the code for the main click event on our button….

private void Button1_Click(object sender, EventArgs e)
{
    string FilePath = "C:\\Desired\\Path\\";
    // define and add parameters (names and values below are placeholders - use your report's)
    Microsoft.Reporting.WinForms.ReportParameter[] reportParameterCollection = new Microsoft.Reporting.WinForms.ReportParameter[3];
    reportParameterCollection[0] = new Microsoft.Reporting.WinForms.ReportParameter();
    reportParameterCollection[0].Name = "Parameter1";
    reportParameterCollection[0].Values.Add("Value1");
    reportParameterCollection[1] = new Microsoft.Reporting.WinForms.ReportParameter();
    reportParameterCollection[1].Name = "Parameter2";
    reportParameterCollection[1].Values.Add("Value2");
    reportParameterCollection[2] = new Microsoft.Reporting.WinForms.ReportParameter();
    reportParameterCollection[2].Name = "Parameter3";
    reportParameterCollection[2].Values.Add("Value3");

    reportViewer1.ServerReport.SetParameters(reportParameterCollection);

    //render report
    string[] streamids;
    string mimeType, encoding, extension;
    string deviceInf = "<DeviceInfo><OutputFormat>PDF</OutputFormat><PageWidth>8.5in</PageWidth><PageHeight>11in</PageHeight></DeviceInfo>";
    Microsoft.Reporting.WinForms.Warning[] warnings;
    byte[] bytes = reportViewer1.ServerReport.Render("PDF", deviceInf, out mimeType, out encoding, out extension, out streamids, out warnings);

    //write file and refresh reportviewer control
    string completefilename = FilePath + "Report.pdf";
    FileStream fs = new FileStream(completefilename, FileMode.Create, FileAccess.Write);
    fs.Write(bytes, 0, bytes.Length);
    fs.Close();
    reportViewer1.RefreshReport();
}

So let’s break this down into manageable chunks of code…

Line 3 simply defines the path to the folder where we would like to export the report

Lines 5 through 15 define and set three parameters for our report.  Obviously if yours has more or fewer you can adjust accordingly.  Line 16 simply calls the SetParameters method of our ReportViewer’s ServerReport and passes our newly generated parameters.

Lines 18 through 22 define a few variables that we need in order to render our report, but the real magic happens on line 23 where we call the Render method of our ReportViewer’s ServerReport and store the PDF result in an array of bytes.

Lines 26 through 29 take our byte array and utilize a FileStream object in order to write the information to a PDF file.

I’ve included a refresh command on Line 30 even though it isn’t needed for just rendering one report.  I wanted to place it here in case someone is copying the code to achieve the same thing I was trying to achieve.  Basically I had all of this code within a loop – I was looping through various strings and running my target report multiple times, each time with different parameters – in order to do this you need to call the RefreshReport() method at the end of every run so the object can completely re-initialize.  I even went as far as to set the visibility of the ReportViewer object to false as I simply wanted to click a button and be left with a folder full of PDFs.  So, not needed for one run, but if you place the code into a loop it’s needed.
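
For reference, here’s a rough sketch of what that loop might look like – it reuses the FilePath and deviceInf variables from the click event above, and the parameter name and list of values are completely made up, so adjust them to whatever your report expects:

// hypothetical list of parameter values - one PDF gets dumped per value
string[] parameterValues = { "ValueA", "ValueB", "ValueC" };
string[] streamids;
string mimeType, encoding, extension;
Microsoft.Reporting.WinForms.Warning[] warnings;

foreach (string value in parameterValues)
{
    // build and pass the parameter for this run
    Microsoft.Reporting.WinForms.ReportParameter param = new Microsoft.Reporting.WinForms.ReportParameter();
    param.Name = "Parameter1"; // placeholder parameter name
    param.Values.Add(value);
    reportViewer1.ServerReport.SetParameters(new[] { param });

    // render and dump to PDF just like the single-run version above
    byte[] bytes = reportViewer1.ServerReport.Render("PDF", deviceInf, out mimeType, out encoding, out extension, out streamids, out warnings);
    using (FileStream fs = new FileStream(FilePath + value + ".pdf", FileMode.Create, FileAccess.Write))
    {
        fs.Write(bytes, 0, bytes.Length);
    }

    // let the ReportViewer fully re-initialize before the next run
    reportViewer1.RefreshReport();
}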

So in essence this is the code I’ve used.  It certainly is a time saver – we used to have someone manually changing parameters and dumping these reports to PDF which as you can imagine can take quite a long time if you need to do it a few hundred times – now, it’s simply an application that they run…

Happy coding.

Ravello on my wrist – Pebble in the cloud

You can probably get the gist of what this post might be about from the title, but it does leave a little to the imagination.  For those who hate waiting for the point, go ahead and watch this small video…

Before I get right into what I’ve done let me first provide a little background information as to why I’ve done this aside from just looking for “something geeky to do”.

First up, I’ve pretty much let everyone know how much I heart Ravello Systems.  Not to go too deep, but as I build up labs and environments for this blog and for other interests I really like to break things.  Why?  That’s how I learn best: breaking things, fixing them, then writing them down.  The problem is I seem to always be rebuilding or fixing before I can move on to my next project.  Ravello solves that issue for me – with Ravello I’m able to keep multiple blueprints of completely configured vSphere labs (different versions, different hardware configs) in the cloud.  When I’m feeling geeky I can simply deploy one of these as an application to either Google or Amazon and away I go.  If I break it to the point of no return it’s no biggie, I can simply redeploy!  Case in point, it’s a time-saver for me!

Secondly I love to write code – it’s an odd passion of mine but it’s something I actually went to school for and never 100% pursued.  Meaning I love to write code….casually!  I couldn’t imagine dedicating my whole career to it, but having the knowledge of how to do it casually sure has helped me with almost every position I’ve held.

Thirdly, a little while ago I purchased a Pebble watch.  I’m still not sure why I wanted a smartwatch but I knew if I had one I’d want it to be somewhat “open” and Pebble met those needs.  Using a service called CloudPebble and by turning on the development mode in the iPhone app I’m able to deploy custom applications to my Pebble – so that was a big seller when I was looking at watches – oh, and the fact that it’s only like $100 helps as well…

So on to the problem – I mentioned I love Ravello and have multiple applications set up within the service.  The applications are great, however it takes a good amount of time after powering one on before you are able to start using it.  Those vSphere services need time to initialize and boot.  My usual routine involves logging into Ravello and powering on what I might need for the night before I leave work.  That way the initialization can happen during my commute, supper with my family, and bedtime routines, and it’s ready to go when I am.  There are times though when I get halfway home and realize I forgot to power on my labs, or I’m not near a computer and can’t be bothered to use the small iPhone screen.

There’s an app for that!

For these reasons I decided to try and figure out the Ravello APIs and the Pebble SDK and see if it was possible to create a small application to simply log into Ravello, select an existing application, and power it on!  It sounds simple enough but took a lot of trial and error – I had no clue what I was doing, but in the end I was left with the solution below – and it works, so I guess you could call it a success.


There are a few pieces that need to fall into place before any of this will work.  First up you will need a CloudPebble account.  CloudPebble is a development environment that allows us to write applications for use on the Pebble watch in either JavaScript or C.  You can use an existing Pebble account to log into CloudPebble or simply set up a new account – either way you need one and it’s free!

Secondly you will need to enable developer connections within the Pebble app on your phone.  This is easily done by selecting ‘Developer’ within the main menu and sliding the switch over.  Honestly, it’s a phone app, I’m sure you can figure it out.

Thirdly, let’s go ahead and set up a project within CloudPebble.  You can do this by simply importing mine, or manually by giving your new project a name and selecting PebbleJS as your Project Type.  Once created you should be at a screen similar to that shown below…


As you can see we have one source file (app.js).  This is the only source file we will need for this project.  If you imported my project you are done for now, but if you created a new project manually this file will be full of a bunch of example code on how to perform various functions and respond to different events within the Pebble interface – we won’t need any of this, so go ahead and delete all the code within the file, but not the file itself.  We will replace it with the code explained in the next section.

The code

If you just want to grab all the code and go through it on your own, go ahead and get that here.  For the rest of us I’ll try and explain the different blocks of code below…

// import required libraries
var UI = require('ui');
var ajax = require('ajax');
var Vector2 = require('vector2');
var Vibe = require('ui/vibe');

Lines 1 through 5 simply deal with importing the libraries we will be working with – UI will give us access to the Pebble UI, ajax is what we will use for the Ravello API calls, Vector2 is for positioning items on the watch, and Vibe is simply so we can access the vibration features of the watch.

// setup authentication information
var encodedLogin = "mybiglongencodedstring";
var expirationTimeInSeconds = 600; // timeout for app

Lines 8 and 9 set up a couple of variables for the application.  First up, encodedLogin represents a base64-encoded string of the username and password you use to log into Ravello, with a “:” between them.  You can grab this from any base64 encoding tool, using UTF-8 as the output – just don’t forget to place the “:” between the username and password (i.e. encode “username:password”).  Copy the result and assign it to the encodedLogin variable on Line 8.
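
If you’d rather not paste your credentials into an online encoder, you can generate the same string locally in pretty much any language – here’s a quick C# one-liner as an example (the credentials are obviously made up):

// base64-encode "username:password" using UTF-8, then paste the output into encodedLogin
string encodedLogin = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes("someone@example.com:MyPassword"));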

Line 9 deals with our expiration time – when we power on an application within Ravello we need to specify an auto power-off parameter which states how long we want before the application powers itself down.  You don’t want to use up all those valuable CPU hours, right?  The variable defined on line 9 is matched to that, only in seconds, so get your calculator out and come up with a number (e.g. two hours would be 7200).

// main window
var splashWindow = new UI.Window();
var text = new UI.Text({
position: new Vector2(0,0),
size: new Vector2(144,168),
text: 'Logging into Ravello Systems, please wait...',
// Add to splashWindow and show

Lines 11 through 25 simply define the first splash window we will see in the application.  Kind of a message to show the user as we are making the API calls and gathering the application lists.  You can start to see some of the Pebble object functions and parameters here…

As we move into ajax calls starting on Line 28 we can start to see the URLs and API calls to Ravello and how they are formatted when using PebbleJS.  From here each API call that is sent to Ravello is nested within the previous – this was the only way I could get this to work.   You can go ahead and read the docs on the ajax function here – I still don’t really completely understand the values being returned but hey, it works!

Anyways, back to the task at hand – as shown below, lines 28-30 make our login request, passing basic authorization and our encodedLogin variable within the header.  After parsing the response on line 34 we display yet another splash screen (lines 35-44) with a success message.

//login to Ravello
ajax({ url: '',method: 'post',
headers: {Authorization: "Basic " + encodedLogin,
Accept: "application/json"}
function(data,status,obj) {
// success into Ravello
var contents = JSON.parse(data);
var text2 = new UI.Text({
position: new Vector2(0,0),
size: new Vector2(144,168),
text: 'Hello ' + + ', you are now logged in! - Fetching applications, please wait...',

Another API call, this one to gather our application list, takes place on lines 46 and 47.  From there lines 51 through 74 build a menu to display the application listing, hide our previous screen, and display our newly formed menu.

ajax({ url: '',method: 'get',
headers: {Accept: "application/json"}
// success application list
var apps = JSON.parse(data); 
var count = Object.keys(apps).length;
var menuItems = [];
var appname;
var appid;
for(var i = 0; i < count; i++) {
appname = apps[i].name;
appid = apps[i].id;
// Construct Application menu to show to user
var resultsMenu = new UI.Menu({
sections: [{
title: 'My Applications',
items: menuItems
// Show the Menu, hide the splash;

At this point we are waiting on user interaction – the user needs to select the application they want powered on.  Line 76 defines that exact event listener, triggered once the user hits the select button.

// Add an action for SELECT
resultsMenu.on('select', function(e) {
console.log('Item number ' + e.itemIndex + ' was pressed!');
// this is where magic happens and we translate which item was pressed into turning on applications
var detailCard = new UI.Card
{title: "Starting Lab", subtitle: e.item.title }
detailCard.body('Setting lab power off time to ' + expirationTimeInSeconds.toString() + ' seconds...');
var ExpURL = ''+e.item.subtitle+'/setExpiration';
// set expiration time for selected application
var expbody = { "expirationFromNowSeconds": + expirationTimeInSeconds };
url: ExpURL,type: "json",method: "post",headers: { Accept: "application/json" }, data: expbody
// success setting expiration time
detailCard.body('Setting lab power off time to ' + expirationTimeInSeconds.toString() + ' seconds...'+ 'DONE!\nPowering on lab...');
var StartURL = ''+e.item.subtitle+'/start';
url: StartURL,type: "json",method:"post",headers: {Accept: "application/json"}

Once an application is selected, lines 78 through 84 display some status messages as to what is happening, and beginning on line 89 we start the API calls to power on the application.  First, line 92 sets the expiration time for the selected application.  Then line 102 sends the actual power-up command to Ravello.

// success starting application
console.log("Success on start:" + status);
detailCard.body('Setting lab power off time to ' + expirationTimeInSeconds.toString() + ' seconds...'+ 'DONE!\nPowering on lab...' + 'DONE!\nLab Powered On' );

Lines 108 and 109 simply display some success messages to the user and send a short vibrate command to the Pebble watch.

I’ve done my best to explain the code – it’s probably not the cleanest or best way to do all this but guess what?  I can power on a Ravello application from my watch so that’s all that matters…  Please feel free to steal all the code if you want it – or here is the complete CloudPebble project if you are in a hurry and just want to skip the copy/paste.  I’d love any feedback anyone may have on this.  For now this is where the project sits but I’d love to expand it further and integrate with more of the Ravello APIs available.  At the moment I’m happy with powering on my Ravello labs from my wrist!

Learning 3PAR – Part 2 – Moar Chunklets

In Part 1 we went through some of the common terminology within the HP 3PAR array and now we will go into a bit more detail about one of them – the Chunklet!  A Chunklet is a key player in how the 3PAR aims to utilize all of the disks within the array, and in turn maximize the performance and protection it can get out of the array!  I mentioned that during the initialization of a physical disk it is divided up into 1GB Chunklets, but what I didn’t mention is that there are a few different types of Chunklets within the HP 3PAR.  Now, these may not be “official” HP names as I kind of named them myself during my reading.  And for some reason I’m now craving gum :)

Normal Used Chunklets

These are the Chunklets that are utilized by Logical Disks.  They are strung together within different RAID sets across different physical disks in order to provide capacity to a CPG, which in turn passes it along to a Virtual Volume (this is essentially our datastore when it’s all said and done).  These chunklets hold our production data.

Normal Reserved Chunklets (Logging Chunklets)

I don’t know if these really exist but this is what I’m going to call them.  They are pretty much the same as Normal Used Chunklets however they have been pre-configured in  reserved Logical Disks which are created by the system.  We normally see a reserved Logical Disk for Logging (used for disk failures/rebuilds), admin (used to store event logs and administration information) and srdata (used to store historical stats and information).  We will often see these logical disks containing chunklets closer to the end of the spindles as well.

Normal Unused (Free) Chunklets

These Chunklets are exactly how they are described – they are Chunklets that are provisioned, and are NOT spares, but have not yet been claimed by any Logical Disk.  It’s pretty safe to say that during installation all chunklets (except designated spares and reserved) are essentially free chunklets until you start provisioning LUNs.

Spare Chunklets

Some Chunklets will be designated as spares during the initialization of the 3PAR.  Meaning, not all 1GB Chunklets are available to be used within a Logical Disk.  Spare Chunklets are essentially placeholders which are utilized when we have a physical disk failure and the Logical Disk RAID set needs to be rebuilt.  An intelligent note here – the system automagically selects which Chunklets are to be assigned as spares, however it does so in a way that most of the spare chunklets are located as close to the end of the physical disk’s block space as possible, leaving the closer blocks for production.

Chunklet Relationships

Everything just seems silly with the word chunklet in front of it :)  Either way there are a few terms that are used to describe the relationships between our Normal Used Chunklets and all other chunklets within the system.

  • Local Spare Chunklets –  This would be a chunklet designated as a spare, whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Local Free Chunklet – An Unused/Free chunklet whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Remote Spare Chunklet – A spare chunklet whose primary path is connected to a node different than the node owning the source logical disk containing the used chunklet.
  • Remote Free Chunklet – A free/unused chunklet whose primary path is connected to a node different than the node owning the source logical disk containing the used chunklet.

We have mentioned failing physical disks a couple of times now, so I think this would be a good time to discuss exactly what happens during a disk failure and how it affects our Chunklets…

  • When a connection is lost or a failure of a physical disk occurs, the system immediately forwards all cached writes destined for the failed chunklets to chunklets contained in the reserved Logging Logical Disk.  This occurs until the failed physical disk/chunklets come back online, until the Logging LD becomes full, or until the rebuild process has been completed.
  • The rebuild process occurs concurrently with the above step, where the system begins to reconstruct lost data utilizing the remaining chunklets and RAID levels provided.
    • There is some logic that happens during this rebuild/relocation phase as well – the system first looks to select a local spare chunklet, if none are to be found it moves on to a local free chunklet, then a remote spare chunklet, and finally a remote free chunklet.  All the while trying to maintain consistency in terms of the characteristics between the failed and target chunklets (Speed, Drive Type, etc).
  • Once the rebuild has completed, the logging disks are replayed and data flushed back down to the newly constructed volume.

So in the end these little tiny 1GB chunks of contiguous space are a key player in the 3PAR array.   To help understand them I tend to try to remove the fact that they are on individual drives, and think of them somewhat as really small, granular 1GB drives, some marked as spares, some in different logical drives with different raid sets, and some set aside to provide functionality for the array.  All that said though they are not separate drives, different chunklets live on the same drive, leaving us with the ability to provide different RAID levels on the same drive, mix and match different sized drives without wasting capacity, and stripe our logical disks across multiple shelves, and in some cases, even provide shelf-level protection.   Plus, they make for a nice little visualization of coloured blocks within the 3PAR Management Console :)

Veeam v9 – What we know so far…

Just as I did last year during VeeamON in regard to Veeam Backup and Replication v8, I thought I would throw a post out there about some of the new features of version 9.  Understandably this list can and probably will change – maybe new features are added or existing features slightly change – either way these are the features that I’ve heard about thus far.  If I’m wrong or missing any please let me know and I’ll update accordingly.

Unlimited Scale-out Backup Repository

This is perhaps one of the biggest features included within v9 – all too often we see environments over-provision the storage for their backup repositories – you never know when you might get a large delta or incremental, and the last thing we want is to run out of space and have to go through the process of provisioning more.  In the end we are left with a ton of unused and wasted capacity, and when we need more, instead of utilizing what we have we simply buy more – not efficient in terms of capacity and budget management.  This is a problem that Veeam is looking to solve in v9 with their Unlimited Scale-out Backup Repository functionality.  In a nutshell the scale-out backup repo will take all of those individual backup repositories you have now and group them into a single entity or pool of storage.  From there, we can simply select this global pool of storage as our target rather than an individual repository.  Veeam can then choose the best location to place your backup files within the pool depending on the functionalities and user-defined roles each member of the pool is assigned.  In essence it’s a software-defined storage play, only targeted at backup repositories – gone are the days of worrying about which repository to assign to which job – everybody in the pool! :)

More Snapshot/Repository integration.

Backup and restore from storage snapshots is no doubt a more efficient way to process your backups.  Just as Veeam has added support for HP 3PAR/StoreVirtual and NetApp, we are now seeing EMC thrown into that mix.  As of v9 we will be able to leverage storage snapshots on EMC VNX/VNXe arrays to process our backups and restores directly from Veeam Backup and Replication – minimizing impact on our production storage and allowing us to keep more restore points, processing them faster and truly providing us with the ability to have a < 15 minute RTPO.

On the repository end of things we’ve seen the integration provided for Data Domain and ExaGrid – as of v9 we can throw HP StoreOnce Catalyst into that mix.  Having a tighter integration between Veeam and the StoreOnce deduplication appliance provides a number of enhancements in terms of performance for your backups and restores.  First off, you will see efficiencies in copying data over slower links due to the source-side deduplication that StoreOnce provides.  StoreOnce can also create synthetic full backups by performing only metadata operations, eliminating the need to actually copy the data during the synthetic creation, which in turn provides efficiency for a very I/O-intensive operation.  And of course, creating repositories for Veeam backups on StoreOnce Catalyst can be done directly from within Veeam Backup & Replication, without the need to jump into separate management tools or UIs.

Cloud connect replication

Last year Veeam announced the Cloud Connect program, which essentially allows partners to become somewhat of a service provider for their customers looking to ship their Veeam backups offsite.  Well, it’s 2015 and the same type of Cloud Connect technology is now available for replication.  Shipping backups offsite was a great feature, but honestly, being able to provide customers with a simple way to replicate their VMs offsite is ground-breaking.  Disaster Recovery is a process and technology that is simply out of reach for a lot of businesses – there isn’t the budget set aside for a secondary site, let alone extra hardware sitting at that site essentially doing nothing.  Now customers are able to simply leverage a Veeam Cloud/Service Provider and replicate their VMs to the provider’s data center on a subscription basis.


DirectNFS

When VMware introduced the VMware APIs for Data Protection (VADP) it was ground-breaking in terms of what it allowed vendors such as Veeam to do for backup.  VADP is the basis of Veeam’s Direct SAN transport mode, allowing data to be transferred directly from the SAN to the Veeam Backup and Replication console.  That said, VADP is only supported on block transports, limiting Direct SAN to just iSCSI and Fibre Channel.  In true Veeam fashion, when they see an opportunity to innovate and develop functionality where it may be lacking they do so.  As of v9 we will now be able to get the same style of direct access on our NFS arrays using a technology called DirectNFS.  DirectNFS will allow the VBR console server to directly mount our NFS exports, allowing Veeam to process the data directly from the storage and leaving the ESXi hosts to do what they do best – run production!

On-Demand Sandbox for Storage Snapshots

The opportunities that vPower and Virtual Labs have brought to organizations have been endless.  Having the ability to spin up exact duplicates of our production environments, running them directly from our deduplicated backup files, has solved many issues around patch testing, application upgrades, etc.  That said, up until now we could only use backup files as the source for these VMs – starting with v9 we can now leverage storage snapshots on supported arrays (HP, EMC, NetApp) to create completely isolated copies of the data that resides on them.  This is huge for those organizations that leverage Virtual Labs frequently to perform testing of code or training.  Instead of waiting for backups to occur we could technically have a completely isolated testing sandbox spun up from storage snapshots in, essentially, minutes.  A very awesome feature in my opinion.

ROBO Enhancements

Customers who currently use Veeam and have multiple locations will be happy to hear about some of the enhancements in v9 centered around Remote/Branch Offices.  A typical configuration in deploying Veeam is to have a centralized console controlling the backups at all of our remote locations.  In v8, even if you had a remote proxy and repository located at the remote office, all the guest interaction traffic was forced to traverse your WAN as it was communicated directly from the centralized console.  In v9 things have changed – a new Guest Interaction Proxy can be deployed which will then handle this type of traffic.  When placed at the remote location, only simple commands will be sent across the WAN from the centralized console to the new GIP, which will in turn facilitate the backup of the remote VMs, thus saving on bandwidth and providing more room for, oh, I don’t know, this little thing called production.

When it comes to recovery things have also drastically changed.  In v8 when we performed a file-level recovery the data actually had to traverse our WAN twice – once when the centralized backup console pulled the data, then again as it pushed it back out to its remote target – not ideal by any means.  Within v9 we can now designate a remote Windows server as a mount server for that remote location – when a file-level recovery is initiated the mount server can handle the processing of the files rather than the backup console, saving again on bandwidth and time.

Standalone Console

“Veeam Backup & Replication console is already running”  <- Any true Veeam end-user is sure to have seen this message at one time or another, forcing us to either find and kill the process or yell at someone to log off :)  As of v9 the Veeam Backup & Replication console has now been broken out from the Veeam Backup & Replication server, meaning we can install a client on our laptops in order to access Veeam.  This is not a technical change in nature, but honestly this is one of my favorite v9 features.  I have a lot of VBR consoles and am just sick of having all those RDP sessions open – this alone is enough to force me to upgrade to VBR v9 :)

Per-VM backup files

The way Veeam stores our backup files is getting another option in version 9.  Instead of having one large backup file that contains multiple VMs, we can now enable what is called a “Per-VM backup file chain” option.  What this does is store each VM’s restore points within the job in its own dedicated backup file.  Some advantages to this?  Think about writing multiple streams in parallel processing mode into our repositories – this should technically increase the performance of our backup jobs.  Certainly this sounds like an option you may only want to use if your repository provides deduplication, as you would lose the job-wide deduplication provided by Veeam if you enable this.

New and improved Explorers

The Veeam Explorers are awesome, allowing us to restore individual application objects from our backup files depending on what application is inside them.  Well, with v9 we have one new explorer as well as some great improvements to the existing ones.

  • Veeam Explorer for Oracle – new in v9 is explorer functionality for Oracle.  Transaction-level recovery and transaction log backup and replay are just a couple of the innovative features that we can now perform on our Oracle databases.
  • Veeam Explorer for MS Exchange – We can now get a detailed export report which will outline exactly what has been exported from our Exchange servers – great for auditing and reporting purposes for sure!  Another small but great feature – Veeam will now provide us with an estimation of the export size for the data contained in our search queries.  At least we will have some idea as to how long it might take.
  • Veeam Explorer for Active Directory – Aside from users, groups, and the normal objects in AD we might want to restore, we can now process GPOs and AD-integrated DNS records.  Oh, and if you know what you are doing, Veeam v9 can also restore configuration partition objects (I’ll stay away from this one :))
  • Veeam Explorer for MS SQL – One big item that has been missing from the SQL explorer has been table-level recovery – in v9 this is now possible.  Also in v9 is the ability to process even more SQL objects such as Stored Procedures, functions and views as well as utilize a remote SQL server as a staging server for the restore.
  • Veeam Explorer for SharePoint – As much as I hate it, SharePoint is still widely used, therefore we are still seeing development within Veeam on their explorer.  In v9 we can process and restore full sites as well as site collections.  List and item-level permissions can now be restored as well.

There are a few more enhancements and features but honestly I can’t write them all down – we will just have to wait and see for ourselves!  Veeam Backup & Replication version 9 is slated to be released sometime later this year – so we won’t have to wait long!

Learning 3PAR – Part 1 – Chunklets, Logical Disk, CPGs, and Virtual Volumes

As I’m currently in the beginning phases of an HP 3PAR deployment I thought it might be a good idea to write a few posts centered on some of the concepts behind the 3PAR architecture.  For the most part I can relate the different terminology to other storage arrays I’ve used in the past, but some of it is somewhat new to me as well.  Either way I’m no expert and am still learning myself, so ease up on me if I make a mistake eh!  Anyways, for the first part of this series I’ll concentrate simply on some of the terminology and layers that exist within the 3PAR StoreServ and try to explain them the best I can – remember, I’m explaining them to me as well!

5 Layers to the hosts

As with any array the path that data takes to get from our hosts to its final destination on disk is a complex one – but thankfully we don’t have to worry about all of the bumps in the road along the way.  That said, it’s always nice to understand the road as best we can in order to determine how best practices and configuration changes will apply to our environment.  With the 3PAR that path contains 5 essential layers: Virtual Volumes, Common Provisioning Groups, Logical Disks, Chunklets, and Physical Disks.



We can somewhat see by the diagram the relationship between each layer but before taking a holistic view let’s first discuss each layer…

Physical Disks

This is an easy one right?  A physical disk is just that, a physical disk located inside of your 3PAR array, encompassing all types of disk within the array.


Chunklets

The first thing a 3PAR does when it is discovering its storage is break down all of the capacity on your physical disks into chunklets.  Each chunklet is 1GB in size and occupies contiguous space on a physical disk.  Chunklets are local to that physical disk only and cannot span to others.

Logical Disks

Logical Disks are essentially a grouping of chunklets which are arranged as rows of like RAID sets.  LDs will ensure that each chunklet within a RAID set is physically located on a different physical disk.  We don’t directly create LDs on the 3PAR – they are generated during the creation of a CPG (explained next), or more precisely, when a Virtual Volume is created on a CPG.  All of the metadata however – RAID type, allocation, and growth of an LD – is defined when creating the CPG itself.

Common Provisioning Groups (CPG)

A CPG is simply a pool of Logical Disks that provides the means for a Virtual Volume (explained next) to consume space.  When we deploy a CPG we do not actually use any of the space in our pooled logical disks until a virtual volume is created – meaning a 2TB CPG with no virtual volumes consumes no space at all.  We can think of a CPG as similar to an EVA disk group, but feeding on logical disks instead of physical disks.

Virtual Volumes

No, these aren’t the VVOLs you’re looking for – this is simply the terminology 3PAR uses to define the LUNs that are presented to hosts – they are not the VVOLs which we have all seen come supported in vSphere 6.  Either way, a Virtual Volume is a LUN that draws its capacity from a CPG – one CPG can provide space to many virtual volumes.  A virtual volume is the LUN that is exported out to your ESXi hosts, and eventually hosts your datastores.  Just like most arrays, Virtual Volumes can be provisioned either thick or thin – with a thin-provisioned Virtual Volume only instructing its associated CPG to draw space from the logical disks as space is needed.  CPGs have the ability to create logical disks as needed to handle the increased demand for capacity, up until the user-defined size limit of the CPG is reached.

So working backwards we come to something like the following:

  • A datastore is located on a Virtual Volume
  • A Virtual Volume draws its space from a Common Provisioning Group (CPG).
  • A Common Provisioning Group is any given number of Logical Disks joined together to form some sort of contiguous space.
  • A Logical Disk is simply a collection of chunklets which are joined together in rows in order to produce a certain RAID set (1, 5, 6, etc.).
  • A Chunklet is a 1GB piece (chunk) of any given physical disk within the array.  It’s also a very funny word.
  • A physical disk is…well, a physical disk.

So there we have it – it being the very very very basic understanding of some of the terminology within the HP 3PAR.  Certainly we can dive deeper into some of these terms and we will in later posts – I mean, there are many different types of Chunklets, some reserved, some spare, but we will save those and some other terms such as Adaptive Optimization for another post (mainly because I have no idea quite yet :))