
Friday Shorts – Certs, Tools, Loads, VVOLs and #SFD7

It’s been quite a long time since my last “Friday Shorts” installment and the links are certainly piling up!  So, without further ado, here are a few tidbits of information that I’ve shared over the last little while…

A little bit of certification news!

VMware education and certification has certainly taken its fair share of backlash in the last few months, and honestly, it’s rightly deserved!  People don’t like it when they invest in a certification, both in money and time, just to have an expiry date placed on all their efforts!  Either way, that’s old news and nothing is changing there.  What I was most concerned about was whether or not I would be able to skip the upgrade of my VCP and just take a VCAP exam instead, which would in turn re-up my VCP.  Then the announcement of no more VCAP was made – which threw those questions of mine for a loop – but now, after this announcement, it appears that there will be an upgrade/migration path for current VCAP holders to work towards the newly minted VCIX.  Have a read and figure out where you fit in and start planning.  I already hold a VCAP5-DCA, so by taking the design portion of the VCIX I would be able to earn my VCIX certification in full – sounds good to me!  Now we just need the flipping exam blueprints to come out so we can all get to studying! :)

New version of RVTools!

Yup, the most famous piece of “nice to haveware” has an updated version.  I’ve used RVTools for quite some time now – as an administrator, any piece of free software that I can get to help me with my job is gold!  RVTools saves me a ton of time when gathering information about my virtual environment and my VMs.  If you haven’t used it, definitely check it out – if you have, upgrade – you can see all of the new changes and download it here!

KEMP giving away LoadMaster!

Keeping on the topic of free tools, let’s talk about KEMP for a moment!  They are now offering their flagship KEMP LoadMaster with a free tier!  If you need any load balancing done at all I would definitely check this out!  Now, there are going to be some limitations, right – nothing in this world is completely free :)  It’s community supported only and you can only balance up to a maximum of 20 MB/s – but hey, it may be a great solution for your lab!  Eric Shanks has a great introduction on how to get it up and going on his blog, so if you need a hand check it out!  I also did a quick review a few months back on load balancing your Log Insight installation with KEMP.  Anyways, if you are interested, go get yourself a copy!

You got your snapshot in my VVOL!

As my mind wanders during the tail end of the NHL season, I often find myself racing through different things during the commercial breaks of Habs games – this time I said to myself, “Self, do snapshots work the same when utilizing the new VVOL technology?”  Then myself replied, “Hey self, you know who would know this answer?  Cormac Hogan.”  A quick look at his blog and lo and behold, there it was: a post regarding snapshots and VVOLs.  If you have some time, check it out – Cormac has a great way of laying things out in quick and easy-to-follow blog posts and this one is no exception.  In fact, before the first-place team in the Eastern Conference returned from the TV timeout I had a complete understanding of it – now, back to our regularly scheduled programming.

#SFD7 – Did you see it?

It appears that most, if not all, of the videos from Storage Field Day 7 have been uploaded from the Silicon Valley internets into the wide world of YouTube!  There was a great list of delegates, vendors and presenters there, so I would definitely recommend you check them out!  There were crazy hard drive watches, fire alarms, and best of all, a ton of great tech being talked about!  IMO the show could have done with just a few more memes though :)  With that said, you can find all there is to know about Storage Field Day 7 over at GestaltIT’s landing page!

Rock the vote! Top vBlog voting now open!

It’s that special time of year again – a time for the virtualization community to come together and vote for their favorite virtualization blogs.  Yes – the Top vBlog Voting for 2015 is underway.  As much as this is simply a ranking of blogs, I’m not going to lie – it feels great to be recognized for all the work that I put into this blog, and I appreciate each and every vote and reader that I have here.  This will be my fourth year participating in the Top vBlog voting and honestly I’m humbled by the way things have turned out.  In 2012 I put myself out there in the contest and came in at #125, in 2013 I moved up to a whopping #39, and last year, 2014, I landed in spot #20 (wow!).  Thank you all for the support!

That’s one small step for man, one giant…I have a dream!

I know the subtitle above doesn’t make much sense, but I wanted to somehow sneak a picture of Farley into this post, so there’s that!  Seriously though, if you are a reader of this blog, or any blog on the vLaunchpad for that matter, be sure to get over to the survey and vote!  Help pay respect and give recognition to the bloggers who spend countless hours trying to bring you relevant and useful information.  Be sure to read this post by Eric Siebert outlining a few tips and things to keep in mind while voting.  This isn’t a popularity contest – vote for the blogs you feel are the best – and if you aren’t sure, take a look back at some of the content they’ve produced over the past year.  Eric has links and feeds to over 400 blogs (insane!) on the launchpad if you have a spare 3 or 4 days :)

Speaking of Eric

Don’t forget to give huge thanks and props to Eric for the time he spends putting this thing together.  I can’t imagine the amount of work that goes into maintaining something like this.  Honestly, I don’t know how he keeps up with it all – the linking, etc.  I have a hard enough time going back through my drafts and creating hyperlinks :)  So props to you, Eric, and thank you!  Also, reach out to the wonderful folks at Infinio and thank them for once again sponsoring the Top vBlog Voting!  A lot of what goes on within the community wouldn’t be possible without sponsorships and help from all of the great vendors out there!

You have until March 16

That’s right, this whole thing wraps up on March 16, so make sure you get your choices in before then.  You will find mwpreston dot net front and center at the top of your screen once you start the survey (just in case you are looking for it :)).  Obviously I’d appreciate a vote, but be true to yourself – if you don’t think I deserve it, skip me and move on to someone you think does :)

mwpreston dot net vote

I tend to use the Top vBlog voting as a time to reflect on what I’ve accomplished over the last year, and 2014 was a super one for me!  I had the chance to attend a couple of new conferences – VeeamON and Virtualization Field Day 4 – both of which I tried my best to cover on this blog.  I’ve also been doing a lot of writing for another site, which has been a blast (if you are looking for a best news blog vote, check them out).  No matter where I end up, it’s simply an honor to be part of this community and to have made so many new friends from across the world!  So here’s to an even better 2015!

Tech Field Day – #VFD4 – VMTurbo Putting a price on resources!

VMTurbo closed off the first day at VFD4 in Austin, Texas with an overview and deep dive into their flagship product, Operations Manager.  This was one of the presentations I was most looking forward to, as my fellow Toronto VMUG co-leader, fellow Canadian, and good friend Eric Wright was involved in it – and for the first time I got to see Eric on the “other side of the table,” speaking for a vendor.

 Disclaimer: As a Virtualization Field Day 4 delegate all of my flight, travel, accommodations, eats, and drinks are paid for.  However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors.  This is done at my own discretion.

Demand-Driven Control for the Software-Defined Universe

Eric started off by prompting everyone’s thoughts around what exactly Operations Manager is – not by talking about the product or what it can do, but by briefly explaining a motto that VMTurbo has been built around: Demand-Driven Control for the Software-Defined Universe.  I know, it’s a long one, but in essence it’s something that is lacking within the industry.  Software-Defined X has brought many benefits into our data centers, perhaps the biggest being control – we can now have software controlling our storage, software controlling our network, and, in the case of automation, software controlling our software.  As Eric pointed out, this control is great, but useless if there is no real consistency or leverage behind whatever is doing the controlling – in fact, having demand, having our infrastructure be the driving factor behind this control, is truly the answer.  VMTurbo’s Operations Manager is a product that helps us along the path to Demand-Driven Control for the Software-Defined Universe, and it does so in its own unique way…

Desired State – Datacenter Nirvana

Before we get into VMTurbo’s unique take on operations management, I first want to talk a little bit about desired state.  Looking after a virtual datacenter, we are always trying to bring our VMs, our applications, our workloads into what we consider a desired state.  This desired state essentially combines availability and performance, all while maximizing the efficiency of the resources and infrastructure we have to work with.  We begin to see anomalies and performance issues when we veer away from this desired state, and traditionally we, as administrators, are tasked with bringing our workloads back into it.  VMTurbo states that this is where the problem lies – this human interaction takes time: time for humans to find out about the shift, as well as time for humans to put the puzzle back together and get back to the desired state.  VMTurbo essentially takes the human interaction out of the equation – allowing software, in this case Operations Manager, to both detect the shift from desired state and, more importantly, take action to move your environment back to it – thus the “control” part of Demand-Driven Control.

And the Demand-Driven part?

This is where the uniqueness of VMTurbo’s Operations Manager shines through.  With Operations Manager in control, making the decisions about which VMs should run where, it needs a way to look holistically at your environment.  It does this by applying an economic model to your infrastructure, essentially turning your datacenter into a supply chain.  Every entity in your environment either supplies or demands resources, and just as in economics, when resources are plentiful, they are cheap; as resources dwindle, they get a lot more expensive.


So in terms of a VM demanding resources, Operations Manager calculates the cost of those resources – again, holistically across your entire environment – to determine just how those resources should be provisioned.  Think of adding more disk to a VM: you need to look at where the disk will come from, how expanding that disk will affect other consumers (VMs) on the same datastore, and how the extra capacity will affect other suppliers such as your storage array, your LUN, etc.  Operations Manager calculates all of this information in real time to determine how to best provision that storage capacity to the VM, and takes action if need be to free up resources or create more supply, all while maintaining the desired state of all of your applications.
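To illustrate the supply-and-demand idea, here is a tiny sketch – purely my own toy formulation, with invented function names and a made-up pricing formula, not VMTurbo's actual algorithm – where each resource is priced by its utilization, so a nearly full datastore becomes "expensive" and new demand flows to the cheapest supplier:

```javascript
// Illustrative sketch only: price a resource so cost climbs steeply
// as utilization approaches 100%, mimicking supply and demand.
function resourcePrice(utilization) {
  // utilization is 0..1; price explodes as supply runs out
  return 1 / (1 - Math.min(utilization, 0.99));
}

// Choose the "cheapest" datastore that can still fit the request.
function cheapestDatastore(datastores, requestGB) {
  var best = null;
  datastores.forEach(function (ds) {
    if (ds.freeGB < requestGB) return; // not enough supply at all
    var utilAfter = 1 - (ds.freeGB - requestGB) / ds.capacityGB;
    var price = resourcePrice(utilAfter);
    if (best === null || price < best.price) {
      best = { name: ds.name, price: price };
    }
  });
  return best;
}

var pick = cheapestDatastore([
  { name: "ds-sata-01", capacityGB: 1000, freeGB: 150 },
  { name: "ds-sas-02",  capacityGB: 1000, freeGB: 600 }
], 100);
// ds-sas-02 is far less utilized, so its "price" is lower and it wins
```

The real product obviously weighs many more commodities (CPU, memory, IOPS, network) at once, but the mechanic is the same: every consumer shops for the lowest aggregate price.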

Operations Manager also goes deeper than just the VM when determining who its buyers are.  Through the use of WMI, SNMP, or by simply importing metrics from third-party tools, Operations Manager is able to discover services inside of your operating systems and throw them into the crazy economic market as well.  Think of things like Tomcat servers, Java heaps, SQL Server, etc.  These are processes that may affect the demand for memory, and without insight into them, making a recommendation for more memory on a VM isn’t going to help anything.  By taking these granular metrics and statistics from inside your VM’s operating system, Operations Manager can give a complete recommendation or action that best suits your application, your VM, and your entire infrastructure.

It still does all the other stuff

Now, VMTurbo’s supply chain model definitely sets it apart from other monitoring tools, and the fact that Operations Manager can take action automatically is another big plus when comparing the product to others – but you may be asking yourself, what about all of the other stuff that most monitoring tools do today?  Well, Operations Manager does that as well.  Items such as right-sizing a VM, taking away or granting CPU, placement, capacity planning, etc.  Operations Manager does all of this, and in fact it also applies these actions to its supply chain model, allowing the software to see just how granting another 2 vCPUs to a VM will “disrupt” the market and decide whether or not that change is “worth it.”  Operations Manager also has some decent networking functionality built in.  By figuring out which VMs are “chatty,” or communicating with each other often, Operations Manager can recommend moving those VMs onto the same host, eliminating any performance degradation or latency that could occur by having that communication move out across your network.

When VMTurbo takes action it does so as either a recommendation or an action – meaning we can have the software recommend changes to the user, or we can have the software go ahead and take care of the issues itself.  Honestly, this is a personal preference, and I can see customers probably using a mix of both.  When calculating these recommendations and actions, Operations Manager also places a transaction cost on any move it makes, which keeps VMs from endlessly bouncing back and forth between hosts trying to achieve their desired state.
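To make that transaction-cost idea concrete, here's a tiny hedged sketch – again my own toy formulation, not VMTurbo's code; the function name and threshold are invented for illustration.  A move is only approved when the price improvement outweighs a fixed cost for the move itself, so a VM won't oscillate between two nearly identical hosts:

```javascript
// Toy model: approve a VM move only if the price improvement on the
// destination outweighs a fixed "transaction cost" for moving at all.
var TRANSACTION_COST = 0.5; // tunable; represents the cost of a vMotion

function shouldMove(currentPrice, destinationPrice) {
  return (currentPrice - destinationPrice) > TRANSACTION_COST;
}

shouldMove(3.0, 1.0); // true  -- big win, take the move
shouldMove(1.6, 1.4); // false -- marginal gain, stay put and avoid ping-pong
```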

Operations Manager really looks like a slick product that takes a different stance on monitoring and healing your infrastructure.  Having the application that does the watching also do the doing makes sense to me – it eliminates the need for human interaction, which in turn eliminates risk and certainly decreases the time it takes to get back to desired state.  And I know I’ve specifically geared this post towards vSphere, but honestly VMTurbo supports just about everything – think OpenStack, Azure, Hyper-V, AWS, vCloud – it’s got them all covered.  If your interest is at all piqued, I encourage you to watch all of the VMTurbo #VFD4 videos here – or better yet, get yourself a trial version and try it out yourself.  Oh, and this just in – get your name in on a home-lab giveaway they are having around their newest launch.

Tech Field Day – VFD4 – StorMagic A VSA living on the edge

Before we get too far into this post, let’s first get some terminology straight.  StorMagic refers to your remote or branch offices as the “edge.”  This might help when reading through a lot of their marketing material, as I sometimes tend to relate “edge” to networking – more specifically, entry/exit points.

 Disclaimer: As a Virtualization Field Day 4 delegate all of my flight, travel, accommodations, eats, and drinks are paid for.  However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors.  This is done at my own discretion.

StorMagic, a UK-based company, was founded in 2006 and set out to develop a software-based storage appliance that enterprises can use to solve one big issue – shared storage at the edge.  StorMagic is another one of those companies that presented at VFD4 with a really strong sense of who their target markets are – they aren’t looking to go into the data center (although there is no reason they can’t), and they aren’t looking to become the end-all be-all of enterprise storage (although I’m sure they would love that) – they simply provide a shared, highly available storage solution for companies that tend to have many remote branch offices with a couple of ESXi (or Hyper-V) hosts.  On the second day of VFD4, in a conference room at SolarWinds, StorMagic stood up and explained how their product, SvSAN, can solve these issues.

Another VSA but not just another VSA

Choosing between deploying a traditional SAN and a VSA is a pretty easy thing to do – most of the time it comes down to the sheer fact that you simply don’t have enough room at your remote site to deploy a complete rack of infrastructure, nor do you have the resources on site to manage the complexity of a SAN – so a VSA presents itself as a perfect fit.  With that said, there are a ton of VSAs on the market today, so what sets StorMagic apart from all the other players in the space?  Why would I choose SvSAN over any other solution?  To answer these questions, let’s put ourselves in the shoes of a customer in StorMagic’s target market – a distributed enterprise with anywhere from 10 to 10,000 remote “edge” offices.

One of the driving forces behind SvSAN’s marketing is the fact that you can set up your active/active shared storage solution with as little as 2 nodes.  2 – Two.  Most VSA vendors require at least a three-node deployment, and justifiably so – they do this to prevent a scenario called split-brain.  Split-brain occurs when nodes within a clustered environment become partitioned, with each surviving node thinking it’s the active one, which results in a not-so-appealing situation.  So how does StorMagic prevent split-brain scenarios with only two nodes?  The answer lies in a heartbeat mechanism called the Neutral Storage Host (NSH).  The NSH is recommended and designed to run centrally, with one NSH supporting multiple SvSAN clusters – think one NSH supporting 100 remote SvSAN sites.  The NSH communicates back and forth with the SvSAN nodes to determine who is up and who is down, acting as the “tie breaker,” if you will, in the event the nodes become partitioned.  That said, while the NSH is an important piece of the SvSAN puzzle, it doesn’t necessarily need to run centralized – for sites that have poor or no bandwidth, the NSH can run on any Windows, Linux, or Raspberry Pi device you want, locally at the site.  Beyond the heartbeat mechanisms of the NSH, SvSAN also does a multitude of things locally between the two nodes to prevent split-brain: it can utilize any one of its networks – management, iSCSI, or mirroring – to detect and prevent nodes from becoming partitioned.  So what advantages come from not requiring that third node of compute within the cluster?  Well, one less VMware license, one less piece of hardware to buy, and one less piece of infrastructure to monitor, troubleshoot, and back up – which can add up to a pretty hefty weight in loonies if you have 10,000 remote sites.
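To make the tie-breaker concept concrete, here's a hedged sketch of two-node-plus-witness logic – my own simplification for illustration, not StorMagic's implementation.  The idea is simply that a node that loses sight of its peer may only keep serving I/O if the neutral witness picks it as the winner:

```javascript
// Simplified witness/quorum logic for a two-node cluster.
// A node stays active only if it can still see its peer, OR the
// neutral witness agrees it should win the tie-break.
function staysActive(canSeePeer, canSeeWitness, witnessPicksMe) {
  if (canSeePeer) return true;              // cluster intact, carry on
  if (canSeeWitness) return witnessPicksMe; // partitioned: witness breaks the tie
  return false;                             // fully isolated: stand down,
                                            // avoiding split-brain
}

// Mirror link down, both nodes can still reach the NSH: exactly one wins.
staysActive(false, true, true);  // node A keeps serving I/O
staysActive(false, true, false); // node B stands down
```

Because at most one node can be the witness's pick, two partitioned nodes can never both stay active – which is the whole point of the NSH.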


Aside from lowering our infrastructure requirements, SvSAN brings a lot of enterprise functionality to your remote sites.  It acts in an active/active fashion, synchronously replicating writes between each node.  When a second SvSAN node is introduced, a second path to the VM’s storage is presented to our hosts.  If at any time one host fails, the other host, containing the mirrored data, can pick up where the failed one left off – which essentially allows VMware HA to take our VMs that were running on local storage on the failed host and restart them on the surviving host, using its local storage.  While the failed node is gone, the surviving SvSAN journals and writes metadata about the changes that occur in the environment, minimizing the time it will take to re-synchronize when the original node returns.  That said, the original node isn’t required for re-synchronization – the SvSAN architecture allows for the second node to come up on different hardware or even different storage.  This newly added node will be automatically configured, set up, and re-synchronized into the cluster – same goes for the third, the fourth, the fifth node and so on – with just a few clicks.

As far as storage goes, SvSAN can take whatever local or network storage you have presented to the host and use it for its datastores.  The appliance itself sits on a datastore local to the host, somewhere in the neighborhood of 100GB – from there, the remaining storage can be passed straight up to SvSAN in a JBOD, RDM, or “VMDK on a datastore” fashion.  SvSAN also gives us the ability to create different storage tiers, presenting different datastores to your hosts depending on the type of disk presented, be it SATA, SAS, etc.  In terms of SSD, SvSAN supports either running your VMs directly on solid-state datastores, or carving up an SSD tier to be used as a write-back cache to help accelerate some of those slower tiers of storage.


In terms of management, StorMagic is fully integrated into the vSphere Web Client via a plug-in.  From what I’ve seen, all of the tasks and configuration you need to perform are done through very slick, wizard-driven menus within the plug-in, and for the most part StorMagic has automated a lot of the configuration for you.  When adding new nodes into the VSA cluster, vSwitches, network configurations, and iSCSI multipathing are all set up and applied for you – and when recovering existing nodes, surviving VSAs can push configuration and IQN identifiers down to the new nodes, making the process of coming out of a degraded state that much faster.

Wait, speaking of VMware…

Worst transition ever, but hey – who better to validate your solution than one of the hypervisors you run on?  As of Feb 4th, VMware and StorMagic have announced a partnership which basically allows customers to couple the new vSphere ROBO licensing with a license for SvSAN as well.  Having VMware – who took a shot at their own VSA in the past (ugh, remember that?) – choose your product as one they bundle their ROBO solutions with has to be a big boost of confidence for both StorMagic and their potential customers.  You can read more about the partnership and offering here – having both products bundled together is a great move on StorMagic’s part IMO, as it can really help push both adoption and recognition within the VSA market.

Should I spend my loonies on this?

IMO StorMagic has a great product in SvSAN.  They have done a great job of stating who their target market is and who they sell to – and defending questions to no end with that market in mind.  HA and continuous uptime are very important to enterprises with a distributed architecture – they’ve placed these workloads at the “edge” of their business for a reason; they need the low latency, and honestly, the “edge” is where a company makes its money, so why not protect it?  With that said, I see no reason why an SMB or mid-market business wouldn’t use this within their primary data center and/or broom closet, and I feel StorMagic could really benefit by focusing some of their efforts in that space – but that’s just my take, and the newly coupled VMware partnership, combining SvSAN with the ROBO licenses, kind of de-validates my thinking and validates that of StorMagic – so what do I know :)  Either way, I highly recommend checking out StorMagic and SvSAN for yourself – you can get a 60-day trial on their site, and you can find the full library of their VFD4 videos here.

Share All Of The Content – Automation around sharing community blogs on Twitter

Attending an X Field Day event has been awesome for me – there are a ton of perks: you get to hear deep dives directly from vendors, usually involving CTO/CEO/founder-type people, and you get to meet an amazing panel of fellow delegates and develop friendships.  But aside from all this, there is one huge benefit that usually goes un-blogged: you get to hear stories and experiences from Stephen Foskett himself – and he has a lot of them.  One in particular caught my attention, as he explained all of the behind-the-scenes automation that occurs in terms of building the TFD webpages and sharing all this information out on his respective social media channels.  As soon as we, as delegates, click ‘Publish,’ a ton of IFTTT recipes, Zapier functionality, and custom scripts take our posts, tag relevant vendors/authors, and kick off the craziness that is Foskett’s brain in code.  It’s really quite amazing.  So amazing that I thought I might try my hand at my own.  Now, I am by no means at the level Mr. Foskett is at – but it’s a start – and here’s how it goes…

My Brain

What I set out to accomplish was simple.  I wanted to be able to flag, Digg, or star (choose whatever terminology fits the RSS reader of your choice) blog posts as I read them – ones I thought were awesome.  From there, the posts would be transformed into a tweet mentioning the author and sent out on my Twitter account at a later time – keeping in mind I would need to buffer these, as I could be “digging” 20 posts at a time.

My Brain on Code

So here is how I accomplished the task of taking those random ideas from my brain and transforming them into code.  There are a few pre-reqs and different technologies used, so ensure you have these if you plan on duplicating this.

  • Twitter – yeah, you need a Twitter account.
  • Twitter App – you need to setup an app within that Twitter account – this is what will allow the rest of this craziness to actually connect into your account and send the tweet – we will go over the how to on this
  • Google Drive – The core scripting behind all of this is done in Google Script in behind a Google Spreadsheet – so, you will need a Google account.
  • Digg – I use this as my RSS reader so if you are following along step by step you might want to set this up.  If you use another, I’m sure you can figure out how to get your favorite posts from the reader into Delicious
  • Delicious – I implemented this as a middle man between my RSS reader and the Google Spreadsheet simply due to the fact that I may want to share out content that isn’t inside of my RSS reader.  I can easily add any content into Delicious.
  • IFTTT – You will also need an If This Then That account set up, as we will be using recipes to move content from Digg into Delicious, and furthermore from Delicious into the Google Spreadsheet.  I use IFTTT for a ton of other “stuff” to make my life easier.  You should already have an account here set up :)

So, the concept is as follows

  1. I “digg” a post within Digg – IFTTT then takes that post and creates a Delicious bookmark with a specific tag, “ShareOut.”  I could also just use the Delicious Chrome plug-in to bookmark and tag any site that I’m currently on.
  2. IFTTT then takes any new public bookmarks with the “ShareOut” tag that I have created and adds them into a Google Spreadsheet, containing the blog Title, URL, and a “0” indicating that I haven’t processed this yet.
  3. The spreadsheet contains a “CheckNew” trigger/function, which runs every x minutes/hours.  This takes any rows with a “0” (new ones from Delicious) and transforms them into a tweet – shortening the URL, ensuring we are under the 140 characters, and adding the author’s Twitter handle.  It then places the tweet into the next blank row on the “Tweets” sheet and updates the processed column to “1”.
  4. The spreadsheet contains a “SendTweet” trigger/function, which runs once an hour – this simply takes the first tweet on the “Tweets” sheet and tweets it out, then deletes it from the spreadsheet, allowing the next Tweet to move up in the queue and be processed in the next hour, repeat, repeat, repeat.  Every hour until the sheet is empty.

So let’s set this stuff up.

First up, we need the spreadsheet created within Google Drive – you can copy mine if you like, as it already has a sheet which maps the top 100 vBloggers’ (and then some) blogs to Twitter handles (this is the only way I could figure out how to map Twitter handles to blogs).  Either way, it will need to be set up the same if you create it new.

Next, set up the recipes in IFTTT to copy Diggs to Delicious and subsequently Delicious to the Google Spreadsheet.  IFTTT is pretty easy to use, so I’ll leave it up to you to figure out how to get the data to Delicious and the spreadsheet.  Just ensure, if you are using “Add Row to Spreadsheet” as your “that” action, that you populate all three columns in the spreadsheet in the following order (Title, URL, 0) – the “0” needs to be added in order for the Google Script to function properly.  Let me know if you need a hand.

Now we need to set up an app to allow Google Script to send the tweets for us.  Log into your Twitter account and head to the Twitter application management page.  Once there, click the “Create New App” button in the top right-hand corner.  Most of the information you put here doesn’t matter, with the exception of the Callback URL – this needs to point back at Google Apps Script.


Once created, click on the Permissions tab and ensure that the access is set to “Read and Write”


Now we need to get the Consumer Key and Consumer Secret – these are on the “Keys and Access Tokens” tab, and we will need them within our Google Script later, so shove them over to Notepad or vi or something :)


Now we are done with Twitter and it’s time to get into the good stuff!  Open up your Google Spreadsheet and make sure you have the following sheets within it.

  • Sheet1 – named exactly this; it will be a placeholder for the information coming from IFTTT.
  • BlogTwitter – This sheet contains all of the Blog->Twitter handle mappings.
  • Tweets – This sheet will be a placeholder for our tweets.

Again, feel free to simply copy my spreadsheet – it may make it easier and already has the BlogTwitter sheet populated.

As far as setting up the sheets with the above names in the same order goes, there is nothing more we need to do on the spreadsheet itself – it’s the code behind it we really need.  To get there, select Tools->Script Editor.  When the dialog appears, select “Blank Project” under “Create New Script For.”  If you copied my sheet, you will simply be brought into an already-existing blank project.

Before we can get started with the code there are a couple of things we need to do.  Since I use the Google URL shortener service, you will need to enable it in the project resources.  This is done under Resources->Advanced Google Services – find the URL Shortener API and switch it to On.  You will also need to turn this service on within the Google Developers Console – the link to do so is right within that same dialog, so go do that.


So, as far as the code goes I’m just going to dump it all right here so you can just copy/paste all of it – I’ll explain a few things about it underneath.

function sendTweet() {
  // OAuth configuration for the Twitter app -- drop your Consumer Key
  // and Consumer Secret from the "Keys and Access Tokens" tab in here
  var oauthConfig = UrlFetchApp.addOAuthService('twitter');
  oauthConfig.setAccessTokenUrl('https://api.twitter.com/oauth/access_token');
  oauthConfig.setRequestTokenUrl('https://api.twitter.com/oauth/request_token');
  oauthConfig.setAuthorizationUrl('https://api.twitter.com/oauth/authorize');
  oauthConfig.setConsumerKey('YOUR_CONSUMER_KEY');
  oauthConfig.setConsumerSecret('YOUR_CONSUMER_SECRET');
  var requestData = {
    'method': 'POST',
    'oAuthServiceName': 'twitter',
    'oAuthUseToken': 'always'
  };
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName("Tweets");
  var tweet = sheet.getRange("A1").getValue();
  var encodedTweet = encodeURIComponent(tweet);
  if (tweet != '') {
    UrlFetchApp.fetch('https://api.twitter.com/1.1/statuses/update.json?status=' + encodedTweet, requestData);
    // remove the tweet we just sent so the next one moves up the queue
    sheet.deleteRow(1);
  }
}

function checkNew() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var iftsheet = ss.getSheetByName("Sheet1");
  var values = iftsheet.getDataRange().getValues();
  for (var i = 0, iLen = values.length; i < iLen; i++) {
    if (values[i][2] != "1") {
      // get Twitter handle from the blog's base URL
      var urlPath = values[i][1].split("/");
      var baseURL = urlPath[2];
      var twitterHandle = findTwitterHandle(baseURL);
      // get other data
      var postTitle = values[i][0];
      var postURLLong = values[i][1];
      var URLShort = getShortenedUrl(postURLLong);
      // build tweet string
      var myTweet = buildTweet(postTitle, URLShort, twitterHandle);
      // place the tweet in the next available row on the Tweets sheet
      var targetSheet = ss.getSheetByName("Tweets");
      var lastRow = targetSheet.getLastRow() + 1;
      targetSheet.getRange("A" + lastRow.toString()).setValue(myTweet);
      // flag the row as processed so we never tweet it twice
      iftsheet.getRange(i + 1, 3).setValue("1");
    }
  }
}

function getShortenedUrl(url) {
  var longUrl = UrlShortener.newUrl();
  longUrl.setLongUrl(url);
  var shortUrl = UrlShortener.Url.insert(longUrl);
  return shortUrl.getId();
}

function buildTweet(postTitle, postURL, twitterHandle) {
  var tweet = "[Shared] " + postTitle + " - " + postURL;
  if (typeof twitterHandle != "undefined")
    tweet += " via " + twitterHandle;
  // trim the title if the tweet runs over 140 characters
  var tweetlength = tweet.length;
  if (tweetlength > 140) {
    var charsToTrim = tweetlength - 135;
    postTitle = postTitle.substr(0, postTitle.length - charsToTrim);
    tweet = "[Shared] " + postTitle + " - " + postURL;
    if (typeof twitterHandle != "undefined")
      tweet += " via " + twitterHandle;
  }
  return tweet;
}

function findTwitterHandle(blogurl) {
  var twitterSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("BlogTwitter");
  var values = twitterSheet.getDataRange().getValues();
  for (var i = 0, iLen = values.length; i < iLen; i++) {
    if (values[i][0] == blogurl) {
      return values[i][1];
    }
  }
}
Copy and paste all of the above code into the blank file that was created.  First up, remember that consumer key and secret from Twitter?  Yeah, they will need to go in their respective spots near the top of the sendTweet() function.  That is really the only edit to the code you need to make.  If you look at the script there are a few different functions.  checkNew() – this is what takes the data from IFTTT on Sheet1 and transforms it into tweet format, then places it on the “Tweets” sheet.  You can see it calls out some other functions which shorten the URL and ensure the tweet is under 140 characters, as well as flip the flag in column C on Sheet1 from 0 to 1 (ensuring that we don’t tweet the same thing twice).   The sendTweet() function – this takes whatever is in A1 on the Tweets sheet and, you guessed it, tweets it.  When it’s done, row 1 is deleted, allowing the next tweet to move into A1 and be processed the next time the function runs.

To test, put some data in Sheet1: the blog post title in column A, the blog post URL in column B, and a “0” in column C.  You can do this manually for now if you want, or if you have the IFTTT recipes set up, let them take charge.
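To see what checkNew() will do with a row like that, here is the transformation sketched in plain JavaScript – the row data and the @example handle below are made up for illustration:

```javascript
// A sample row as IFTTT would write it to Sheet1: [title, URL, flag]
var row = ["My Test Post", "http://blog.example.com/2015/03/my-test-post", "0"];

// Splitting the URL on "/" leaves the host at index 2
// ("http:", "", "blog.example.com", ...) -- this host is the key
// used to look up the author's handle on the BlogTwitter sheet.
var host = row[1].split("/")[2];

// Assuming the lookup returned a (hypothetical) handle, the finished
// tweet that lands on the Tweets sheet would look like this:
var tweet = "[Shared] " + row[0] + " - " + row[1] + " via @example";
```

The script shortens the URL first in real life; the idea is the same either way.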

Then, at the top of the script editor change the function drop-down to “checkNew” and click “Run”.  If you don’t see the functions listed within the drop-down you may need to File->Save and completely close out both the script editor and the spreadsheet.


Once this has completed its run you should be able to flip back to the spreadsheet and see a tweet sitting inside A1 of the “Tweets” sheet.  The data on “Sheet1” should also be updated with a 1 in the flag column.

From here it’s a matter of flipping back to the Script Editor and running the “sendTweet” function the same way you did the “checkNew”.  You will most likely be prompted to authorize Google as well as authorize Twitter during this process.  Go ahead and do that.  Sometimes I have found that the first time you run it you will need to authorize Google, then you will need to run it again to authorize Twitter.  Once both applications are authorized your tweet should have been sent out!

So this is all great, everything is working.  Now let’s set up triggers for these functions, since running them manually doesn’t make much sense.  To get into your triggers select Resources->All your triggers from the script editor.


As you can see I have set up two.  One that runs my checkNew function every 30 minutes – if it finds a newly inserted row from IFTTT it will build the tweet for me.  The other, sendTweet runs once an hour – this will simply take one of the tweets and send it out!  This way, if there are tweets available it will tweet just one per hour, so I don’t flood everyone with tweets!
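If you would rather not click through the triggers dialog, the same two triggers can be created in code using Apps Script’s ScriptApp service – a quick sketch, meant to be run once from the Script Editor:

```javascript
// Creates the same two time-based triggers programmatically:
// checkNew every 30 minutes, sendTweet once an hour.
function createTriggers() {
  ScriptApp.newTrigger('checkNew').timeBased().everyMinutes(30).create();
  ScriptApp.newTrigger('sendTweet').timeBased().everyHours(1).create();
}
```

Either way works – the dialog and the code produce the same triggers.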

And that’s that!  I know this was a long-winded post but I wanted to convey what I was trying to accomplish and how I did it!  Hopefully some of you find it useful if you are looking to deploy a similar type of solution.  There is a ton of great content out there and this is just my part in helping to spread the good word within the virtualization community.

If you are trying to get through this and need a little help don’t hesitate to reach out – I’m more than willing to help!  If you have any ideas on how to improve this process I’d love to hear them as well.  The part currently nagging at me is that manual BlogTwitter sheet – not sure how it could be automated or improved, but if you have ideas let me know…  Thanks for reading!  And sharing :)

Tech Field Day #VFD4 – Platform9 – Private cloud in 10 minutes or it’s free…

Let’s set the scene


You are at your desk when your CIO can be heard descending from the marble-floored, nicely painted floor above you.  They stop and ask for an update on the private cloud initiatives.  As you begin laying out all your options they quickly intervene, quoting several articles they read in the in-flight magazine on their return trip from Hawaii.  The conversation caps off with an “OpenStack, it’s all the rage, and we need to be doing it, so, do it!”  The endless articles and blogs that you have read in regards to OpenStack and its complexity suddenly rush back into your brain.  As the sweat begins to drip off your nose you begin wondering if your planned trip to Hawaii is even feasible now that you will be wrapped up in the OpenStack project for the next year.  Your coworker, who was quietly eavesdropping on the whole conversation, promptly fires off an “FYI” into your inbox containing the link to all the Platform9 videos from VFD4, stating they might help, followed up with some nervous text around the whereabouts of his stapler.

 Disclaimer: As a Virtualization Field Day 4 delegate all of my flight, travel, accommodations, eats, and drinks are paid for.  However, I do not receive any compensation, nor am I required to write anything in regards to the event or the sponsors.  This is done at my own discretion.

FYI – Platform9 at #VFD

Platform9, the 2nd presenter on the 1st day of VFD4, featured Sirish Raghuram (CEO) and Madhura Maskasky (Head of Product) showcasing their OpenStack-as-a-Service solution.  Throughout all of the Platform9 presentations you could really see one simple message shine through, and that message just happens to be Platform9’s manifesto:

Platform9’s mission is to make private clouds easy for organizations of any scale.

So how does Platform9 do this?  Well, ironically they use one of the most complex cloud projects out there – OpenStack.

Why OpenStack?

So, why would Platform9 go down the OpenStack road given its reputation for being difficult and time consuming to implement and maintain?  Well, it all comes down to how Platform9 envisions the future of private cloud.  Sure, private cloud will have all of the usual components – self service, orchestration layers, resource pooling and placement – however Platform9 looks beyond this, stating that the private cloud of the future must span virtualization technologies – essentially not differentiating between ESXi, KVM, or Hyper-V.  On top of spanning hypervisors, Platform9 states future private clouds must also span containers, as products like Docker have become all the buzz these days.  However, one of the biggest aspects of Platform9’s vision of private cloud is that it must be open source – and since OpenStack is the 2nd largest open source project ever, coupled with its incredible drive from the community – hey, why not OpenStack?

So when looking at all of these requirements – orchestration, hypervisor agnosticism, container integration, resource pooling – OpenStack begins to make a lot of sense, seeing as it meets all the criteria defined in Platform9’s vision of private cloud.

Wait a minute, you said as a service, isn’t OpenStack installed on-premises?

There are a couple of common ways enterprises can deploy OpenStack today.  The first is completely on premises, utilizing existing or new infrastructure, all within your data center walls.  This scenario is great in that you have complete control of your OpenStack environment – the management and data all reside safely within your business.  However, this scenario can pose challenges as well.  To implement OpenStack on premises, businesses need to have the resources to do so – and those resources encompass both skill-sets and time, both of which can add up in terms of dollars.  You first need the skill-sets available – the know-how, if you will, to design, implement, and manage OpenStack.  Secondly, you need the time: time to manage the infrastructure and keep up with the updates and upgrades that, as we know, can come like rapid fire in the open source world.

The second common deployment method for OpenStack is a hosted private cloud.  In this scenario a service provider is leveraged which completely hosts a company’s OpenStack deployment.  In most cases they look after the installation and configuration, the management and updates, removing this burden from the customer.  That said, this model does not allow companies to utilize existing infrastructure and usually results in a greenfield deployment.  And depending on the scale of the infrastructure needed, in some ways this can cost just as much as a public cloud instance – and your data still sits outside of your company’s data center.

Platform9 takes an approach that merges both of these scenarios, giving you the best of both worlds – in essence, they abstract the management layer of OpenStack from the hypervisor/data layer.  In the end you are left with your data sitting on your existing infrastructure and the OpenStack management layer running on, and managed by, Platform9’s infrastructure.   Thus you get all of the benefits of an OpenStack private cloud, without the complexity or requirements of setting it all up and managing it.

The nuts and bolts

I know, I know – enough private cloud blah blah blah OpenStack blah blah blah – how the heck does this work?  First up, let’s look at what Platform9 actually delivers from within OpenStack.  Platform9 handles the key modules in OpenStack – Keystone, Nova, and Glance – and delivers them to you through their cloud.  You only provide the compute (BYOC?) and the infrastructure to power them.  What you won’t see inside of Platform9’s solution is Horizon – this is replaced by their own custom-built dashboard.

Once you are ready you can simply download and install Platform9’s agent on any of the Linux/KVM instances that you would like to pair with your new OpenStack solution.  Once initiated, the agent will begin discovering anything and everything there is to know about your environment – this information is then encrypted and sent back to Platform9 to be reported inside the dashboard.  From the dashboard, roles can be assigned to your physical servers – this is a KVM server, this is my image catalog (Glance), etc. – and the changes are reflected back down to your hardware.  That said, Platform9 is not just for greenfield deployments – if you already have KVM running on a physical server, your VMs and images are seamlessly imported into Platform9’s OpenStack as instances – thus the whole “leverage your existing infrastructure” play.


Capacity is automatically calculated and reported into Platform9’s dashboard, allowing customers to quickly see what CPU, memory, and storage they have available, deployed, consumed, etc.  The custom HTML5 Platform9 dashboard is quite slick and easy to navigate.  It supports multiple tenants, users, and tiers of infrastructure, which can be stretched across multiple data centers – meaning you could assign a specific user a specific amount of resources (CPU, memory, storage, networks) which come from specific resource tiers or data centers.

Once environments are discovered and imported into Platform9, the custom dashboard becomes the entry point for all OpenStack management and API calls.  The dashboard takes those API calls and executes them accordingly, instructing the agents and your local infrastructure to take the appropriate action.  All OpenStack APIs are still available within Platform9.
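As a rough illustration of what “all OpenStack APIs are still available” means, a script could authenticate against the hosted Keystone endpoint and then talk to Nova directly.  The endpoint URL and credentials below are placeholders, not real Platform9 values – this just sketches the standard OpenStack Keystone v3 password-authentication request such a script would send:

```javascript
// Hypothetical hosted endpoint -- the real URL comes from your Platform9 account.
var keystoneUrl = "https://example.platform9.net/keystone/v3/auth/tokens";

// Standard OpenStack Keystone v3 password-authentication body.
var authBody = {
  auth: {
    identity: {
      methods: ["password"],
      password: {
        user: {
          name: "demo@example.com",
          domain: { id: "default" },
          password: "secret"
        }
      }
    }
  }
};

// Keystone returns the token in the X-Subject-Token response header;
// subsequent Nova calls (e.g. listing instances) send it as X-Auth-Token.
var request = {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  payload: JSON.stringify(authBody)
};
```

From Apps Script this would be posted with UrlFetchApp.fetch(keystoneUrl, request); any other HTTP client works the same way, since it is just the vanilla OpenStack API.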

Wait!  You said vSphere earlier – that was all KVM stuff!

Whoops!  Did I forget to talk about vSphere integration?  The good news is that since VFD4, Platform9’s OpenStack has entered GA for use with KVM, and a vSphere integration beta has been announced.  This means that all of Platform9’s OpenStack management functionality can also be used with your existing vSphere environment, with the agent simply being deployed as an OVA appliance rather than directly on the hypervisor itself.  In turn, your existing VMs will be discovered by Platform9 and imported as instances within the dashboard, and your templates are converted and imported into Platform9’s OpenStack image catalog.  Basically, all of the functionality that is there with KVM is also available with vSphere, allowing you to manage both your KVM and vSphere environments side by side – just replace KVM in the above section with vSphere. :)  Oh, and Docker integration is in the works!

What’s OpenStack and what’s Platform9?

Seeing as the OpenStack project is ever changing – there is plenty of work being done on each and every component – Platform9 faces a bit of a challenge when deciding what to include or exclude in their product.   Take the topic of reporting, for instance: there is a service being developed inside the OpenStack project called Ceilometer that will handle the collection of utilization data and present it for further analysis, so Platform9 has opted to wait for Ceilometer before enhancing some of their reporting functionality.  No point in veering away from vanilla OpenStack if you don’t have to.  That said, some things can’t wait.  The auto-discovery of your existing workloads and infrastructure that Platform9 performs is not native to OpenStack – this is something they have gone ahead and developed as a value-add to their solution.  Platform9 is also looking to enhance the functionality of the different components, services, and drivers that already exist within OpenStack.  Take the vSphere driver, for instance – Platform9 is working on support for more than one cluster per vCenter inside their environment, on natively accessing vSphere templates rather than performing copy/move operations, and on leveraging vCenter customization specifications directly from Platform9.   They also note that they fully intend to push all of this work back into the OpenStack project – a value that every company should have.

So, in the end, do I think that Platform9 has achieved their mission of making private clouds easy for orgs of any scale?  Absolutely.  The key differentiator for me with Platform9’s service is the sheer fact that you can not only use your existing infrastructure, but do so in a way that is simple, seamless, and all-encompassing in terms of discovering the workloads and templates you currently have.  In the end, you are left with your KVM/VMware environment, managed by Platform9’s OpenStack, set up within minutes – leaving you with a lot of free time to, oh, I don’t know, look for Nelson’s stapler.


Now, I know I titled this post “in 10 minutes or it’s free” but guess what?  It’s free anyways!  You can try out Platform9 on 6 CPUs and 100 VMs absolutely free!  For more info definitely check out Platform9’s site and all of the #VFD4 videos here!