What’s in a name? – The importance of naming conventions

All too often I find myself in the following situation: I’m in the midst of deploying a new service or project into our environment.  I’ve gone through tons of user manuals and articles, weeks of training and technical tutorials, and successfully completed proofs of concept and pilots, yet as I begin the production deployment I sit at my desk puzzled, staring into space in deep thought, perplexed by the multitude of options running through my mind over what I’m going to name this server.  In the Windows world, those 15 simple characters allowed in the NetBIOS name put my mind in such a tailspin.  Sure, we have our own naming convention for servers – the first three digits are dedicated to a location code, followed by a dash, then whatever, then ending in a “-01” and incrementing – so that leaves me with 8 characters to describe the “whatever” part that I’m deploying.  In some instances, those 8 simple characters tend to be the most difficult decision in the project, even with somewhat of a server and endpoint naming convention in place.

But it goes beyond just servers and endpoints.

Sure, most companies have naming processes in place for their servers and workstations – and I’ll eventually come up with something for those 8 characters after a coffee and some arguments/advice from others.  My most recent struggles, though, apply not to servers but to inventory items within vSphere.  There are a lot of objects within vSphere that require names – Datastores, Port Groups, Switches, Folders, Resource Pools, Datacenters, Storage Profiles, Datastore Clusters... I could go on for quite some time, but you get the point.  All of these things require some sort of a name, and with that name comes a great deal of importance in terms of support.  For outsiders looking in on your environment and trying to understand what something is, as well as for your own “well-being” when troubleshooting issues, a name can definitely make or break your day.

So how do we come up with a naming convention?

This sucks, but there is no right or wrong answer here – it’s your environment and you need to do whatever makes sense for you!  Unlike naming children, we can’t simply pick up a book called “1001 Names for vSphere Inventory Items” and say them out loud with our spouses until we find one that works – we need something descriptive, something we can use when troubleshooting to quickly identify what and where the object is.

Back to my recent struggle – it had to do with datastores.  For a long time I’ve had datastores simply defined as arrayname-vmfs01 and incrementing.  So, vnx-vmfs01, vnx-vmfs02, eva-vmfs01, etc.  This has always been fine for us – we are small enough that I can pretty much remember what is what.  That said, after adding more arrays and a mixture of disk types (FC, SAS, NLSAS, SATA) I began to see a problem.  Does eva-vmfs01 sit on SAS or SATA disks?  Is it on the 6400 or the 4400?  Is it in the primary or secondary datacenter?  Is this production or test storage?  Does it house VMs or backups?  What is the LUN ID of eva-vmfs01?  I know – most of these questions can be answered by running some CLI commands, clicking around within the vSphere client, or performing a little more investigation – but why put ourselves through this when we can simply answer these questions within the object’s name?

So I set out to Twitter, asking if anyone had any ideas in regard to naming conventions for datastores – and I got a ton of responses, all with some great suggestions, and all different!  To sum it all up, here are a few of the suggestions I received for what to include in a datastore name.

  • Array manufacturer/model/model number
  • Disk type (SAS, FC, NLSAS, SATA, SCSI)
  • LUN identifier/volume number
  • Destination type (backups, VMs)
  • Storage tier (Gold, Silver, Bronze / Tier 1, Tier 2, Tier 3)
  • Transport type (iSCSI, NFS, FC, FCoE)
  • Location/datacenter
  • RAID level
  • vSphere cluster the datastore belongs to
  • And of course, a description

Yeah, that’s a crap-load of information to put in a name – and maybe it makes sense to you to use it all, but in my case it’s just too much.  That said, it’s all great information; the toughest part is picking the pieces you need and arranging them in an order that you like.  Having something named DC1-TOR-HP-EVA-6400-FIBRE-FC-R6-VM-GOLD-CLUSTER1-L15 is a little much.
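Just to illustrate the picking-and-arranging part, here’s a quick, purely hypothetical sketch of how a name could be composed from a small subset of those pieces – the fields and the order are just examples, not a standard of any kind.

using System;

class DatastoreNames
{
    // Hypothetical example only - build a name from whichever pieces you decide to keep.
    static string BuildDatastoreName(string datacenter, string array, string diskType, int lunId)
    {
        return string.Format("{0}-{1}-{2}-L{3}", datacenter, array, diskType, lunId).ToUpperInvariant();
    }

    static void Main()
    {
        Console.WriteLine(BuildDatastoreName("DC1", "EVA", "FC", 15));   // DC1-EVA-FC-L15
    }
}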

And in the end

So what did I choose?  Well, to tell you the truth, I haven’t chosen anything yet.  I thought that by writing this post I might spark some creative thinking and it would pop into my head – but no, I’m more confused than ever.  Honestly, I’m leaning towards something like array-disktype-lunid, like EVA-FIBRE-L6 or VNX-NLSAS-L4, but I just don’t know.  This stuff is certainly not a deal breaker; I’m just trying to make my life a bit easier.  If you think you have the end-all be-all of naming conventions, feel free to leave a comment!  I’m always open to suggestions!

#VFD4 – A Canadian in Texas?

I know, I didn’t leave much to the imagination with the blog title, and as you may have guessed, I’m going to be attending Virtualization Field Day 4 in Austin, Texas this January!

I was ecstatic when I received the invitation and it didn’t take much convincing to get me to go!  I’ve been a huge fan and supporter of the Tech Field Day format over the years, and not many events go by where I don’t catch a few sessions on the livestream.  The fact that Austin is on average 30 degrees Celsius warmer than here in January sure helps too!

The event

Aside from the heat, I’m definitely looking forward to being a part of VFD4.  This will be the fourth installment of Virtualization Field Day and it takes place January 14th through the 16th in Austin, Texas.  The Tech Field Day events bring vendors and bloggers/thought leaders together in a presentation/discussion-style setting to talk about anything and everything related to given products or solutions.  I’ll point you to techfieldday.com for a much better explanation of how the events are laid out.

The Delegates

This will be my first time as a delegate and I’m very humbled to have been selected.  Honestly, I get to sit alongside some of the brightest minds that I know.  Thus far Amit Panchal (@AmitPanchal76), Amy Manley (@WyrdGirl), James Green (@JDGreen), Julian Wood (@Julian_Wood), Justin Warren (@JPWarren), and Marco Broeken (@MBroeken) have all been confirmed as delegates, with more to be announced as time ticks on.  Some of these people I’ve met before, some I know strictly from Twitter, and others I haven’t met at all – so I’m excited to catch up with familiar faces as well as meet some new ones.

The Sponsors

So far, six sponsors have signed up for #VFD4 – Platform9, Scale Computing, SimpliVity, SolarWinds, StorMagic and VMTurbo.  Just as with the delegates, some of these companies I know a lot about, some I know a little, and others I certainly need to read up on.  Having seen many, and I mean many, vendor presentations in my lifetime, I have tremendous respect for those that sponsor and present at Tech Field Day.  The sessions tend to be very technical, very interactive, and very informative – three traits that I believe make a great presentation.  I’m really looking forward to seeing my fellow Toronto VMUG co-leader and friend Eric Wright (@discoposse) sitting on the other side of the table :)

Be sure to follow along via Twitter by watching the #VFD4 hashtag leading up to and during the event.  A livestream will also be set up so you too can watch as it all goes down.

I’m so grateful for this opportunity – so thank you to my peers, the readers of this blog, Stephen Foskett and all the organizers of the great Tech Field Day events, and the virtualization community in general.  See you in Texas!

Veeam on Windows 2012 R2 – Don’t forget about large size file records

Unfortunately I had to learn this one the hard way – but in hopes that you don’t have to, here’s the gist of my situation.  I’ve had Veeam running backups to a Windows Server 2012 R2 box as a target for a while now, and I’m getting some awesome dedupe ratios utilizing both Veeam’s and Windows’ built-in deduplication.  That said, just last week I began to see the following error on one of my backup jobs.

“The requested operation could not be completed due to a file system limitation.  Failed to write data to the file [path_to_backup_file]”

After a bit of Google-fu (see here and here) I concluded that my problems were mainly due to the way my volumes were formatted – more specifically, the lack of support for large file records.  I, like most people, went ahead and simply used the GUI to format and label all of my volumes.  The problem is that the GUI uses the default settings for the format operation, which support only small size file records.

Therefore, over time – after some data has been laid down on the disk and dedupe has been doing its thing for a while – you might start to see the same error I was.  The solution, unfortunately, is to reformat that volume with large size file records.  The command to do so is pretty simple and is listed below.

Format <DRIVE>: /FS:NTFS /L

The key here is the /L switch, which specifies support for large size file records.  Also, this process can take quite some time – from the time I ran the command to the time I got to this point in writing this post, I’m still sitting at 2%.

In my case, I removed all the backups from Veeam that lived on this disk.  I’m comfortable doing that since I also have replication running, so I wasn’t worried about losing any data.  If you are, though, you could always simply disable dedupe and copy your backup files to another location, run the format command, and then copy them back.  Honestly, I knew in my case it would be easier and quicker to simply reseed those VMs.

Also, it’s not to say that Veeam won’t work without large size file records.  I have 8 other volumes on this Veeam instance which have all been formatted with the same default settings, and I haven’t seen any issues whatsoever with them – just this one volume is throwing the error!  For now, I plan on just leaving those other volumes the way they are.  Am I just delaying the inevitable?  Only time will tell!
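As an aside, if you want to check how an existing volume was formatted before deciding whether it needs the same treatment, fsutil should tell you – to my knowledge, a “Bytes Per FileRecord Segment” value of 1024 in the output means the default small size file records, while 4096 means large size file records.

fsutil fsinfo ntfsinfo <DRIVE>: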

Thanks for reading!

Get the cobwebs out of your Data Center with CloudPhysics

Just the other day I was thinking to myself: you know what this world needs?  More Halloween-themed infographics relating to IT.  Thankfully CloudPhysics, with their analytic powers, pulled that thought out of my head and did just that!  With a vulnerability dubbed Heartbleed, how can you resist?

On a serious note, some of the data that CloudPhysics has should really scare you.  22% of vSphere 5.5 servers remain vulnerable to Heartbleed!  41% of clusters do not have admission control enabled!  These are definitely some spooky stats and they shouldn’t be ignored!

[Infographic: CloudPhysics Halloween 2014]

But what’s Halloween without some tricks and treats, right?

CloudPhysics has you covered there as well!  Be sure to grab their Halloween Cookbook – a collection of tips and tricks on how you can remediate issues within your data center and stay out of the spooky stats they are collecting.  For an even bigger treat, sign up to allow CloudPhysics to help find the data center goblins for you – for free!  Better yet, sign yourself up for the Community Edition of CloudPhysics – it’s completely free and can give you some great insight into what’s going on inside your virtual data center!  Be sure to head to their official blog post to get all the information you need to fill up your bag!

Google Admin SDK APIs and .net integration – Changing a users password

I know, weird right?  Coding, Google APIs, WTF!?!?  This is a virtualization blog.  Truth is, I was a developer before ever moving into the infrastructure space – not much of one, but I was one. :)  Honestly, this is probably the reason why products like vRealize Orchestrator (more weirdness, calling vCO that) that mix both development and infrastructure together appeal to me so much!  More truth – as much as I try, I can’t quite get away from development; it’s a part of what I do (I guess).

Anyways, to cut to the chase – I’ve recently been working on a project revolving around integrating Google Apps with a homegrown .net application I’ve written.  Now, there is such a thing as the Provisioning API, which is super easy to use and worked great – but it is deprecated and Google could simply stop providing it whenever they want.  Google suggests people move to the Admin SDK – which, of course, is much harder!  Either way, I needed to provide a way for users to reset other users’ Google Apps passwords from within a .net application, without those users having to provide access or permission to their accounts.  My Google-fu was strong on this one, and by the time I finally got it to work I had about 27 tabs open, so I thought it might be nice for the next person to stumble upon one page containing all the info they need to make this happen – hence, this post!

To the developers console

The first step to getting everything to mesh with the Admin SDK involves setting up a project in the Google Developers Console.  Simply log in, select ‘New Project’, and give your project a name and an ID.  Easy enough thus far.  Once the project has been created we need to enable the Admin SDK.  Select APIs from the APIs & auth section of the navigation menu on the left.  Here, we can filter the list of available APIs by entering ‘Admin SDK’ in the browse box, then enable it by switching its status to On.

[Screenshot: enabling the Admin SDK]

From here we need to create a client ID containing the p12 key and credentials that we will delegate access to.  As mentioned earlier, I’ve decided to go the ‘Service account’ route, as I would like an account that I can delegate domain-wide admin access to in order to change passwords, create users, etc., without authorization or permission from the users themselves.  To do this, click the ‘Create new Client ID’ button inside the Credentials section of APIs & auth.  When presented with the ‘Application Type’ dialog, select Service account and click ‘Create Client ID’.

[Screenshot: creating the service account client ID]

Once the process has completed, pay attention to the .p12 key that is automatically downloaded.  We will need this file later when connecting our .net application, so go and store it somewhere safe.  Also note the private key’s password in the dialog, as we will need that information as well.

[Screenshot: private key download and password]

At this point you should see your newly created service account, its Client ID, Email Address, and Public Key Fingerprints.  The only piece of information we need from this screen is the Client ID – usually a big long string ending in googleusercontent.com.  Go ahead and copy that to your clipboard, as we will need it for the next step.

To the Admin Console

From here we need to go to our Google Apps admin console (admin.google.com/your_domain_name) and grant this newly created service account access to specific APIs.  Once logged into the admin console, launch the Security app (sometimes located under the ‘More Controls’ link near the bottom).

[Screenshot: admin console – Security app]

Inside of Security we need to go into the Advanced Settings (located by clicking ‘Show more’) and then the “Manage API client access” section.

[Screenshot: Advanced settings – Manage API client access]

Now we need to grant our service account access to specific APIs within Google by specifying the individual URLs of those APIs.  First, paste the Client ID we created and copied earlier into the Client Name box.  Second, enter all of the API URLs (comma separated) that you want to grant access to.  In my case I’m only looking for user and group management, so I entered https://www.googleapis.com/auth/admin.directory.group, https://www.googleapis.com/auth/admin.directory.user into the API scopes input box.  If you need help figuring out the URL for the specific API you are looking for, you can find them listed in the Authorize Request section of the developer guide for each Google API.  Once you are ready to go, as shown below, click Authorize.

[Screenshot: authorizing API access for the client ID]

And now the code

Thus far we’ve done all the heavy lifting as it pertains to opening up access for .net to utilize the Admin SDK APIs – time to get into a little code!  Again, I don’t consider myself a hardcore developer, as you can probably tell from my code.  There may be better ways to do this, but this is the way I found that worked – and again, there’s not a lot of information out there on how to do it.

First up, there are some project references that you need.  Honestly, you can get the .net client libraries directly from Google, but the easiest way to bring the packages in is with NuGet, as it will pull the dependencies down for you.  Go ahead and import the Google APIs Client Library, Google APIs Auth Client Library, Google APIs Core Client Library, and Google.Apis.Admin.Directory.directory_v1 Client Library.  That should be all you need to do the password resets.
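If you like to use the Package Manager Console, something along these lines should pull everything in for you – the package ID below is the one I believe NuGet uses for the Admin Directory library (double-check it in the gallery, as IDs do change), and it should bring the core Google.Apis and auth dependencies down with it.

Install-Package Google.Apis.Admin.Directory.directory_v1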

So the complete script is listed at the bottom, but for “learning” sake, I’ll break down what I’ve done in the sections to follow.

String serviceAccountEmail = "350639441533-ss6ssn8r13dg4ahkt20asdfasdf1k0084@developer.gserviceaccount.com";
var certificate = new X509Certificate2(@"c:\p12key\NewProject-3851a658ac16.p12", "notasecret", X509KeyStorageFlags.Exportable);

Here I’m simply declaring some variables: first, the serviceAccountEmail – this is the email address (not the Client ID) of the service account we set up; second, our certificate, which is created by pointing the constructor at the p12 key we downloaded earlier and the key password that was displayed (remember those?).

ServiceAccountCredential credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(serviceAccountEmail)
    {
        User = "myadminaccount@mwpreston.net",
        Scopes = new[] { DirectoryService.Scope.AdminDirectoryUser }
    }.FromCertificate(certificate));

This essentially builds the credential object we need in order to instantiate the service we want to use.  Take special note here: I have to pass a User parameter – this is the user our service account will impersonate (they will need to have the correct roles/permissions in Google to perform any of the tasks we attempt).  Also note the Scopes array – this specifies exactly which API scopes we want to authenticate to; these will normally match the end of the corresponding API URL, just without the dots.  That said, we have auto-complete in Visual Studio, right – use it. :)

var dirservice = new DirectoryService(new BaseClientService.Initializer()
{
    HttpClientInitializer = credential,
    ApplicationName = "MyNewProject",
});

Each and every Google API you call from .net will need to be stored in a service variable.  This is where we instantiate a new DirectoryService (to access users).  If you were using the Tasks API, this would be a new TaskService.  No matter what, we always pass a new BaseClientService.Initializer to the constructor.  Note that we also pass in our newly created credential object, as well as the name of the project we created in the Google Developers Console.

User user = dirservice.Users.Get("testuser@mwpreston.net").Execute();
Console.WriteLine(" email: " + user.PrimaryEmail);
Console.WriteLine(" last login: " + user.LastLoginTime);
user.Password = "MyNewPassword";
dirservice.Users.Update(user, "testuser@mwpreston.net").Execute();

And now the magic happens.  Honestly, this will be different depending on which API you are using, but again we have built-in help and auto-complete in Visual Studio, so you should be able to figure out how to do whatever you need.  Above, I simply get a user, display a few of their properties, change their password, and then update the user.  Easy enough!
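And since I mentioned creating users earlier, here’s a rough, untested sketch of what that might look like using the same dirservice object – the property and method names are from the directory_v1 client library as I recall them, so treat it as a starting point rather than production code.

// Hypothetical sketch only - creating a new Google Apps user with the same DirectoryService.
User newUser = new User()
{
    PrimaryEmail = "newuser@mwpreston.net",
    Name = new UserName() { GivenName = "New", FamilyName = "User" },
    Password = "ATemporaryPassword1!",
    ChangePasswordAtNextLogin = true
};
newUser = dirservice.Users.Insert(newUser).Execute();
Console.WriteLine(" created: " + newUser.PrimaryEmail);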

So here’s the code in its entirety.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Security.Cryptography.X509Certificates;

using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Admin.Directory.directory_v1;
using Google.Apis.Admin.Directory.directory_v1.Data;
using Google.Apis.Admin.Directory;

namespace ConsoleApplication3
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Connect to Google API");
            Console.WriteLine("=====================");

            // The service account email (not the Client ID) and the p12 key downloaded
            // from the Google Developers Console.
            String serviceAccountEmail = "350639441533-ss6ssn8r13dg4ahkt20ubasdf2424@developer.gserviceaccount.com";
            var certificate = new X509Certificate2(@"c:\p12key\NewProject-3851a658ac16.p12", "notasecret", X509KeyStorageFlags.Exportable);

            // Build the credential, impersonating an admin account and requesting
            // the Admin Directory user scope.
            ServiceAccountCredential credential = new ServiceAccountCredential(
                new ServiceAccountCredential.Initializer(serviceAccountEmail)
                {
                    User = "MyAdminAccount@mwpreston.net",
                    Scopes = new[] { DirectoryService.Scope.AdminDirectoryUser }
                }.FromCertificate(certificate));

            // Instantiate the Directory service with our credential.
            var dirservice = new DirectoryService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = credential,
                ApplicationName = "MyNewProject",
            });

            // Get the user, show a couple of properties, then change the password.
            User user = dirservice.Users.Get("testuser@mwpreston.net").Execute();
            Console.WriteLine(" email: " + user.PrimaryEmail);
            Console.WriteLine(" last login: " + user.LastLoginTime);
            user.Password = "MyNewPassword";
            dirservice.Users.Update(user, "testuser@mwpreston.net").Execute();

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }
    }
}

I know this post may be a little off-topic compared with the rest of this blog, but the fact of the matter is I couldn’t find a lot of information out there on how to change Google Apps user passwords from .net.  And when I can’t find information, I post it!  Hopefully someone will stumble across this and find it useful!  Thanks for reading!

Veeam Cloud Connect – a customer perspective!

During the partner keynote at VeeamON it was announced that some of Veeam’s partners will be ready to go in terms of supporting the new Veeam Cloud Connect hosting functionality that is set to be released inside of Veeam Backup and Replication v8.  At its most basic, Veeam Cloud Connect and the Veeam Service Provider certification allow partners to become service providers, giving their customers a way to get their backups offsite and providing a full backup hosting environment.


So, in terms of a customer, what does this mean?  Many small to medium sized businesses use Veeam to take care of their day-to-day backups and, as an added bonus, replicate and test their backups to ensure consistency.  That said, the sheer cost of acquiring space, compute, and storage at an offsite location tends to be a hurdle that is hard to overcome.  But it’s not just that – even if budget is not a problem, the complexity of providing and setting up connectivity, networking, and routing can be a challenge for those without a large IT staff.  Backing up to a VSP-certified Veeam Cloud Connect partner eliminates those challenges, allowing customers to simply rent space from their preferred partner.

From within Veeam Backup and Replication, backing up to a service provider through Cloud Connect is essentially no different than backing up locally.  When we set up our backup repositories today, we select either a Windows or Linux machine with some available storage and then it becomes available for use within our backup jobs.  Veeam Cloud Connect is much the same – a Veeam-powered service provider will simply show up as a backup repository within your Veeam Backup and Replication console and can be used within your jobs just like any other repository.  Easy!

But it’s more than that!

Sure, Veeam Cloud Connect will allow us to easily get our backups offsite without incurring most of the major capital costs that come along with that, but it offers much more – more that centers around trust.  Most everyone is aware of giant cloud providers such as Amazon, Microsoft, and VMware, and the services they provide are bar-none amazing, but do we really trust them with our data?  Granting partners the ability to become Veeam hosting providers brings the trust issue closer to home for the customer!  Partners are in the trenches every day, building relationships with customers and ensuring these are life-long commitments.  In my opinion, people tend to trust someone they have worked with often, someone they have met many times over the years, and someone who is local – essentially their preferred IT partners and VARs.  By allowing its partners to become hosting providers, Veeam has essentially leveraged the trust those partners have worked to earn with their customers and, in turn, allowed the partners to provide a well-rounded, complete solution.  It’s really a win-win all around.

I think about markets like education, healthcare, and government – these verticals house very important and sensitive data, data that needs to be protected both onsite and off.  With that said, strict compliance guidelines usually dictate exactly how and where this data may sit – and honestly, if that place is outside of the US, the major players just aren’t there.  I can see how Veeam Cloud Connect can solve this issue.  By utilizing a preferred partner, educational institutions and districts could leverage a partner who is “close to home” to provide them with that offsite repository for their data – someone they already know and trust.

Furthermore, why settle for just one?  There are many school districts in Ontario who may or may not leverage Veeam as their backup vendor of choice – but for those that do, I can see tremendous opportunity for partners to design and implement data centers built directly around the compliance and regulations those districts face.  Again, a win-win – the partner gets many similar customers and the customers get an easy and secure solution, without the need to purchase additional hardware or licensing.

In essence, Veeam Cloud Connect is more than just a backup repository for a customer.  It’s an opportunity: an opportunity for partners to leverage the trust they have built, an opportunity for customers – especially those in similar verticals – to unite and look for a “Groupon”-type deal for offsite backups, and an opportunity for everyone to ensure that in the event of a disaster they are not left without the critical data their business needs to survive!