Tag Archives: Community

A Google Cloud to Amazon vMotion – The Ravello Way!

v2Ravello_Logo_large

Today Ravello Systems, a company based out of Palo Alto and Israel, announced a new beta – one that I think is going to go over very well within the VMware community, as it will allow us to spin up vSphere labs, complete with vCenter Server, ESXi hosts, Domain Controllers, storage and network services and all the VMs that go with the punch, inside of Google's and Amazon's clouds.  To be honest I was kind of skeptical when I first started working with Ravello.  I mean, come on, an ESXi host in Amazon, let alone an ESXi host running VMs inside of Amazon, an ESXi host running VMs with little to no performance penalty, all running within Amazon – you can see why I might have cringed a bit.  But Ravello gave me a shot to try it for myself – and during the introductory chat, as they were showing me how things worked, I thought: hey, what a use case for the new cross-vCenter vMotion capabilities in vSphere 6!  A lab in Amazon, a lab in Google Cloud, and VMs migrating between them – how cool is that?

Who and what is Ravello Systems?

Now, before I get into the details of the vMotion itself I want to take a step back and explain a little bit about Ravello Systems themselves and what they have to offer.  Ravello was founded in 2011 with the sole purpose of driving nested virtualization to the next frontier, and did so when they launched their product globally in August of 2013 (you had to have seen the scooters at VMworld 🙂 ).  They didn't want to simply provide an environment for nested virtualization though – they wanted to make it simple and easy for companies to replicate their data center infrastructure into the public cloud.  The core technology behind all of this is their HVX hypervisor – essentially acting as a cloud VM, sitting in either Amazon or Google and providing overlay networking and storage to the VMs that are placed on top of it.

RavelloHVX

As per the diagram above, the VMs can be built from scratch or imported via an OVA within Ravello's very intuitive, easy to use interface – but perhaps more interestingly, you can utilize the Ravello Import Tool, point it at your ESXi host or vCenter, and import VMs directly from your environment into the cloud!  And it doesn't stop there: Ravello can also detect and recreate every network your VM is attached to, deploying an exact duplicate of your network infrastructure!  If that wasn't good enough for you, the beta announced today adds support for Intel VT through HVX – which means we can now run VMs on top of ESXi on top of HVX on top of Amazon or Google!  True inception, leaving us with the setup shown in the diagram below.

RavelloHVXVT

A great place to break things!

There is a reason why Ravello dubs their technology as having the ability to create “Smart Labs”!  Throughout my early access to the solution I broke and fixed so many things within my applications – and Ravello always gave me a way to rebuild or reconstruct my labs in a very efficient manner.

RavelloSaveToLibrary

First up, we are able to save our VMs to the library – essentially a personal set of VMs and images that we can re-use in all of our applications.  For example, I only had to build my ESXi 6.0 image once – after saving it to the library I was able to simply drag and drop this VM as many times as needed into as many applications as needed, simply re-IPing and renaming once I was done.

RavelloSaveToBlueprint

Having the ability to re-use VMs is cool, but the blueprint functionality that Ravello provides is really where I see the value!  We are able to take a complete application – in my instance an ESXi host, domain controller, vCenter Server, etc. – and save the entire application as a blueprint.  Blueprints are then available to be used as starting points for new applications – meaning I can build a complete lab on Amazon, save it as a blueprint, and then publish a new application to Google which is an exact identical copy, networks and all.  Blueprints are an excellent way to test out the different public clouds, as well as to version or snapshot your entire lab before making any major changes – if things go awry you can simply republish your saved blueprint to a new application.

RavelloBlueprints

Enough talk – Let’s see the vMotion!

Alright!  Let’s get to it!  Let me first warn you, the environment I built to do this was quick and dirty – not a lot of polishing going on here.

The two applications we will be using are Google-vxlan and EC2-vxlan – I’ll let you guess which public clouds each is published to.

ravellovmcanvas

As shown above these applications are pretty similar; each containing an Ubuntu server (used to establish the vxlan tunnel between EC2 and Google), a pfSense appliance that provides a VPN for my vMotion networks, a vCenter Server (the Windows version), and an ESXi host (just one for now).  The EC2 application also contains a jumpbox VM which provides entry into the local network as well as DNS services.

ravelloNetworkingboth

As far as networking goes, the setup at Amazon and Google is almost identical, with the exception of the jumpbox.  The 192.168.0.0/24 network is available at both EC2 and Google.  The 10.0.0.0/24 network is the only network routed to the internet, and therefore my only access into the labs outside of the Ravello GUI – this is why the jumpbox also has a connection to this network, to act as an RDP gateway of sorts.  The two Ubuntu servers each have an elastic public IP attached to them in order to ensure the public IP doesn't change and mess up my vxlan config.  The free trial of Ravello gives you two elastic IPs and four other DHCP public IPs (subject to change every now and then).  The vxlan tunnel is established between the two elastic IPs in order to provide Layer 2 connectivity between Amazon and Google.  The pfSense boxes each have a dynamic public IP attached to them, with an IPSEC tunnel established between the 192.168.1.0/24 and 192.168.2.0/24 networks.

vsphereshot

On the VMware side of things I have two vCenters with embedded PSCs (I know – bad practice) – one in Amazon and one in Google – which are attached to the same SSO domain and configured in Enhanced Linked Mode.  Therefore whatever is at Google can be seen at Amazon and vice versa.  As far as vMotion goes, I've simply enabled it on my existing management interfaces (more bad practice – but hey, it's a lab).  There is local storage attached to the ESXi hosts and one VM named EC2-VM1 present.

So my goal was to migrate this VM from Amazon to Google and back again, taking both the compute and storage with it.  Now, just writing about a vMotion is not that exciting, so I've included a video below so you too can see it move 🙂  It's my first attempt at a video and there were some screaming kids while I made it, so yeah, no narration – I'll try and update with a little tour of the Ravello environment later 🙂

So there you have it – a VM moving from Amazon to Google and back, all while maintaining its ping response – pretty cool!

Is Ravello worth it?

esxi-home-lab

So, with all this, the question remains: is Ravello worth the cost?  Well, considering Ravello estimates the cost of a two-ESXi-node, vCenter and storage lab at an average of $0.81 – $1.71 per hour (usage based, no up-front costs), I would certainly say it is!  The ability to run nested ESXi hosts on top of the public cloud provides a multitude of use cases for businesses – but honestly, I see this being a valuable tool for the community.  I plan on using Ravello solely for my home lab over the next year or so – it's just so much nicer to break things and simply re-publish an application than it is to try and rebuild my lab at home.  If you want to give Ravello a shot you can sign up for the beta here – even after the beta expires you simply swipe your credit card and pay Ravello directly – no Amazon accounts, no Google bills – just Ravello!  You will be limited during the betas and free trials in the amount of CPU, RAM and concurrent powered-on VMs, but they definitely give you enough resources to get a decent lab setup.

Ravello has a great solution, and you can certainly expect more from me in regards to my lab adventures in the public cloud.

Disclaimer: Ravello gave me early access to their ESXi beta in order to evaluate their solution – I first signed up for a free trial and did get the amount of RAM and number of VMs available to me increased.  They didn't, however, require that I write this post – or anything for that matter – nor provide any $$$ for anything that was written – these are all my words!

Veeam announces GA of Veeam Endpoint Backup

During their inaugural VeeamON conference last October, Veeam announced the beta of Veeam Endpoint Backup.  I wrote a little overview of Endpoint Backup in case you need a refresher.  Now, Veeam Backup & Replication has long been famous for being purpose-built for the virtual data center, and Endpoint Backup is the company's answer to bringing the same great Veeamy tech to your physical laptops and desktops.  Today, that beta has ended and Veeam Endpoint Backup is now generally available.

So what’s changed since the beginning of the beta?

VeeamEndpoint

A lot, actually!  Being in beta for 6 months has really helped Veeam ensure they are releasing a genuinely tried and tested, rock-solid product into the market.  In fact, throughout the beta many of the new features now included in Endpoint Backup were suggested by users just like you and me on the community forums surrounding the beta.  Veeam, as always, has done a great job taking user feedback into account and delivering a product that's packed full of useful features and "just works".  There are a lot of features in VEB and you can see them all here – but I'd like to go over a few of my favorites.

Integration between VEB and VBR

Coupling Patch #2 of Veeam Backup and Replication (released later this month) with the GA of Veeam Endpoint Backup brings the awesome functionality of being able to monitor, control and restore endpoint backups from within VBR.  By backing our endpoints up directly into a Veeam backup repository, we are now able to take advantage of many of the traditional VBR restore goodies with our physical backups.  Beyond simple file-level recovery, application-item restores – SQL tables, Exchange and Active Directory objects – can now all be performed on our physical backups as well.  Although the product is geared towards endpoints, meaning desktops and laptops, I see no reason why you couldn't install it on some of those last physical servers you have laying around.  In fact, Veeam themselves say that although it isn't built for servers, it will work on Server 2008 and above.

VeeamEndpointToVBR

Veeam has also added the ability to export the physical disks from our backups directly into a vmdk, vhd, or vhdx file.  Now, this isn't a true P2V process – they aren't removing any drivers or services or preparing the disk to be virtual in any way – that isn't the intention.  This is simply another way to recover, another way to get the data you need – and honestly, if you wanted to try and build a VM out of these exported disks, I'm sure posts on how to do so will be out there in the next few months.

SecPermissions

In terms of security, Veeam has added the ability for administrators to set access restrictions on their backup repositories.  This allows us to grant certain users access to certain repositories, while restricting access to others.

Aside from the new integration, Veeam Endpoint Backups stored in a Veeam backup repository can also take advantage of existing VBR features, such as backup encryption, traffic throttling, monitoring of incoming backups, email status alerts and support for Backup Copy and Tape jobs to get those backups offsite.

It’s not just about B&R

Sure, the integrations with VBR are pretty cool, but they aren't the only thing that's included.  Yeah, we have all of the traditional endpoint backup features like incrementals, multiple target options, and scheduling, but it wouldn't be a Veeam product without a few extra goodies baked in.  I'm not going to go in depth on them all, but listed below are a few of my favorites.

Full support for Bitlocker drive encryption – This gives you the ability to decrypt your Bitlocker backups before restoring, directly from within the Endpoint GUI.

Ability to control the power state of computer post backup – If you have your computer set to backup at the end of your work day, you can leave knowing that once your backup has completed Veeam will, in true green fashion, power down your workstation.

Backup triggers such as "When backup target is connected" – Veeam will monitor for when you plug in that external USB drive or connect to the network that you have set up as your backup target, and can trigger the backup process immediately thereafter.

Support for rotated USB drives – If you want to rotate your backups on one USB drive one week and another the next, Veeam Endpoint Backup can handle this for you, allowing you to backup to one drive while the other goes offsite.

On-battery detection – Backups can be automatically prevented from starting when Veeam detects that your laptop is running on-battery and contains less than 20% run time – ensuring VEB doesn’t chew up valuable power in your time of need 🙂

So what hasn’t changed?

free

We talked about what has changed since the beta bits were first shipped in November, but perhaps the most important and most cared-about feature lands in the "What hasn't changed?" category.  What hasn't changed is that Veeam Endpoint Backup was put into beta as a free product and will remain free now that it is generally available.  Veeam has a long history of providing free tools for the community: Backup & Replication Free Edition, the free SQL, Active Directory and Exchange Explorers, the old FastSCP which was free, and now Veeam Endpoint Backup Free!  There should be no barrier stopping you from going and checking out VEB for yourself.

Now, in my VeeamON post I tried to predict the future of this product – where it would fit in, what features Veeam would add to it – and honestly I was way off on a lot of them.  But one I was sure would come was the integration with Backup and Replication – and it's here now!  Do I think Veeam is done innovating in this area?  Absolutely not!  From my experience, Veeam is a company that never stops moving.  I'm excited to see Veeam Endpoint Backup go GA, and I'm excited to see what the future holds.

Friday Shorts – Certs, Tools, Loads, VVOLs and #SFD7

It’s been quite a long time since my last “Friday Shorts” installment and the links are certainly piling up!  So, without further ado here’s a few tidbits of information that I shared over the last little while…

A little bit of certification news!

VMware Logo

VMware education and certification has certainly taken its fair share of backlash in the last few months, and honestly it's rightly deserved!  People don't like when they invest in a certification, both in money and time, just to have an expiry date placed on all their efforts!  Either way, that's old news and nothing is changing there.  What I was most concerned about was whether or not I would be able to skip the upgrade of my VCP and just take a VCAP exam instead, which would in turn re-up my VCP.  Then the announcement of no more VCAP was made – which threw those questions of mine for a loop – but now, after this announcement, it appears that there will be an upgrade/migration path for current VCAP holders to work towards the newly minted VCIX.  Have a read, figure out where you fit in and start planning.  I already hold a VCAP5-DCA, so by taking the design portion of the VCIX I would be able to earn my VCIX certification in full – sounds good to me!  Now we just need the flipping exam blueprints to come out so we can all get to studying! 🙂

New version of RVTools!

rvtools

Yup, the most famous piece of "nice-to-haveware" has an updated version.  I've used RVTools for quite some time now – as an administrator, any piece of free software that helps me with my job is gold!  RVTools saves me a ton of time when gathering information about my virtual environment and my VMs.  If you haven't used it, definitely check it out – if you have, upgrade – you can see all of the new changes and download it here!

KEMP giving away LoadMaster!

kemp

Keeping on the topic of free tools, let's talk about KEMP for a moment!  They are now offering their flagship KEMP LoadMaster with a free tier!  If you need any load balancing done at all I would definitely check this out!  Now, there are going to be some limitations – nothing in this world is completely free 🙂  It's only community supported and you can only balance up to a maximum of 20 MB/s – but hey, it may be a great solution for your lab!  Eric Shanks has a great introduction to getting it up and going on his blog, so if you need a hand check it out!  I also did a quick review a few months back on load balancing your Log Insight installation with KEMP.  Anyways, if you are interested in checking it out, go and get yourself a copy!

You got your snapshot in my VVOL!

As my mind wanders during the tail end of the NHL season, I often find myself racing through different thoughts during the commercial breaks of Habs games – this time I said to myself: self, do snapshots work the same when utilizing the new VVOL technology?  Then myself replied: hey self, you know who would know the answer to this – Cormac Hogan.  A quick look at his blog and lo and behold, there it was – a post about snapshots and VVOLs.  If you have some time, check it out – Cormac has a great way of laying things out in quick, easy-to-follow blog posts, and this one is no exception.  In fact, before the first-place team in the Eastern Conference returned from the TV timeout I had a complete understanding of it – now, back to our regularly scheduled programming.

#SFD7 – Did you see it?

SFD-Logo2-150x150

It appears that most if not all of the videos from Storage Field Day 7 have been uploaded from the Silicon Valley internets into the wide world of YouTube!  There was a great list of delegates, vendors and presenters, so I would definitely recommend you check them out!  There were crazy hard drive watches, fire alarms, and best of all, a ton of great tech being talked about!  IMO the show could have done with just a few more memes though 🙂  With that said, you can find all there is to know about Storage Field Day 7 over at GestaltIT's landing page!

Rock the vote! Top vBlog voting now open!

2014_Award-Banner_Top-25-small

It's that special time of year again – a time for the virtualization community to come together and vote for their favorite virtualization blogs.  Yes – the Top vBlog voting for 2015 is underway over at vSphere-land.com.  As much as this is simply a ranking of blogs, I'm not going to lie – it feels great to be recognized for all the work that I put into this blog, and I appreciate each and every vote and reader that I have here on mwpreston.net.  This will be my fourth year participating in the Top vBlog voting and honestly I'm humbled by the way things have turned out.  In 2012 I put myself out there and came in at #125, in 2013 I moved up to a whopping #39, and last year, 2014, I landed in spot #20 (wow!).  Thank you all for the support!

That’s one small step for man, one giant…I have a dream!

black-sheep-1996-02-g

I know the subtitle above doesn't make much sense, but I wanted to somehow sneak a picture of Farley into this post, so there's that!  Seriously though, if you are a reader of this blog, or any blog on the vLaunchpad for that matter, be sure to get over to the survey and vote!  Help pay respect and give recognition to the bloggers who spend countless hours trying to bring you relevant and useful information.  Be sure to read this post by Eric Siebert outlining a few tips and things to keep in mind while voting.  This isn't a popularity contest – vote for the blogs you feel are the best – and if you aren't sure, take a look back at some of the content they've produced over the past year.  Eric has links and feeds to over 400 blogs (insane!) on the launchpad, if you have a spare 3 or 4 days 🙂

Speaking of Eric

Don’t forget to give huge thanks and props out to Eric for the time that he spends putting this thing together.  I can’t imagine the amount of work that goes into maintaining something like this.  Honestly I don’t know how he keeps up with it all, the linking, etc.  I have a hard enough time going back through my drafts and creating hyperlinks 🙂  So props to you Eric and Thank You!  Also, reach out to the wonderful folks at Infinio and thank them for once again sponsoring the Top vBlog Voting!  A lot of what goes on within the community wouldn’t be possible without sponsorships and help from all of the great vendors out there!

You have until March 16

That’s right, this whole thing wraps up on March 16 so make sure you get your choices in before then.  You will find mwpreston dot net front and center on the top of your screen once you start the survey (just in case you are looking for it :)).  Obviously I’d appreciate a vote but be true to yourself, if you don’t think I deserve it, skip me and move on to someone you think does 🙂

mwpreston dot net vote

I tend to use the Top vBlog voting as a time to reflect back on what I've accomplished over the last year, and 2014 was a super one for me!  I had the chance to attend a couple of new conferences – VeeamON and Virtualization Field Day 4 – both of which I tried my best to cover on this blog.  I've also been doing a lot of writing for searchVMware.techtarget.com, which has been a blast (if you are looking for a best news blog vote, check them out).  No matter where I end up, it's simply an honor to be part of this community and to have made so many new friends from across the world!  So here's to an even better 2015!

Share All Of The Content – Automation around sharing community blogs on Twitter

sharememe

Attending an X Field Day event has been awesome for me – there are a ton of perks: you get to hear deep dives directly from vendors, usually involving CTO/CEO/founder-type people, and you get to meet an amazing panel of fellow delegates and develop friendships.  But aside from all this there is one huge benefit that usually goes un-blogged, and that is you get to hear stories and experiences from Stephen Foskett himself – and he has a lot of them.  One in particular caught my attention, as he explained all of the behind-the-scenes automation that occurs to build the TFD web pages and share all of that information out on his respective social media channels.  As soon as we, as delegates, click 'Publish', a ton of IFTTT recipes, Zapier functionality and custom scripts take our posts, tag relevant vendors/authors and kick off the craziness that is Foskett's brain in code.  It's really quite amazing.  So amazing that I thought I might try my hand at my own.  Now, I am by no means at the level Mr. Foskett is at – but it's a start – and here's how it goes…

My Brain

So, what I set out to accomplish was simple.  I wanted to be able to flag, digg, or star (choose whatever terminology fits the RSS reader of your choice) blog posts as I read them – ones that I thought were awesome.  From there, the posts would be transformed into tweets mentioning the author and sent out on my Twitter account at a later time – keeping in mind I would need to buffer these, as I could be "digging" 20 posts at a time.

My Brain on Code

So here is how I accomplished the task of taking those random ideas from my brain and transforming them into code.  There are a few pre-reqs and different technologies used, so ensure you have these if you plan on duplicating this.

  • Twitter – yeah, you need a Twitter account.
  • Twitter App – you need to set up an app within that Twitter account – this is what will allow the rest of this craziness to actually connect into your account and send the tweet.  We will go over the how-to on this.
  • Google Drive – the core scripting behind all of this is done in Google Apps Script behind a Google Spreadsheet, so you will need a Google account.
  • Digg – I use this as my RSS reader, so if you are following along step by step you might want to set this up.  If you use another, I'm sure you can figure out how to get your favorite posts from your reader into Delicious.
  • Delicious – I implemented this as a middle man between my RSS reader and the Google Spreadsheet, simply because I may want to share out content that isn't inside of my RSS reader.  I can easily add any content into Delicious.
  • IFTTT – you will also need an If This Then That account set up, as we will be using recipes to move content from Digg into Delicious, and furthermore from Delicious into the Google Spreadsheet.  I use IFTTT for a ton of other "stuff" to make my life easier – you should already have an account here 🙂

So, the concept is as follows

  1. I "digg" a post within Digg – IFTTT then takes that post and creates a Delicious bookmark with a specific tag, "ShareOut".  I could also just use the Delicious Chrome plug-in to bookmark and tag any site that I'm currently on.
  2. IFTTT then takes any new public bookmarks with the "ShareOut" tag and adds them into a Google Spreadsheet, containing the blog title, the URL, and a "0" indicating that the row hasn't been processed yet.
  3. The spreadsheet contains a "CheckNew" trigger/function, which runs every x minutes/hours.  This takes any rows with a "0" (new ones from Delicious) and transforms them into a tweet – shortening the URL, ensuring we are under the 140 characters and adding the author's Twitter handle.  It then places this tweet into the next blank row on the "Tweets" sheet and updates the processed column to "1".
  4. The spreadsheet contains a "SendTweet" trigger/function, which runs once an hour – this simply takes the first tweet on the "Tweets" sheet and tweets it out, then deletes it from the spreadsheet, allowing the next tweet to move up in the queue and be processed in the next hour; repeat, repeat, repeat, every hour until the sheet is empty.
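
The formatting in step 3 boils down to a small chunk of plain JavaScript.  Here's a standalone sketch of just the build-and-trim part (the function name and sample values are illustrative, not the exact code from the spreadsheet – that comes further down):

```javascript
// Build a "[Shared] Title - URL via @handle" tweet, trimming the
// title (never the URL or handle) so we stay under 140 characters.
function formatTweet(title, shortUrl, handle) {
  var tweet = "[Shared] " + title + " - " + shortUrl;
  if (handle) {
    tweet += " via " + handle;
  }
  if (tweet.length > 140) {
    // Cut the title down so the finished tweet lands at ~135 chars
    var excess = tweet.length - 135;
    title = title.substr(0, title.length - excess);
    tweet = "[Shared] " + title + " - " + shortUrl;
    if (handle) {
      tweet += " via " + handle;
    }
  }
  return tweet;
}

console.log(formatTweet("My Post", "http://goo.gl/abc123", "@mwpreston"));
```

Even with a ridiculously long post title, the trim step guarantees the URL and the author mention survive intact.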

So let’s set this stuff up.

First up we need the spreadsheet created within Google Drive – you can copy mine if you like, as it already has a sheet which maps the blogs of the top 100 vBloggers (and some more) to Twitter handles (this was the only way I could figure out how to map Twitter handles to blogs).  Either way, it will need to be set up the same if you create it new.
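
As an aside, that mapping sheet keys off hostnames, because a hostname is the one thing you can reliably pull out of any post URL with a simple split.  Roughly like this (the URL and handle below are made-up examples, and a plain object stands in for the sheet):

```javascript
// Reduce a full post URL to its hostname, which is what the
// BlogTwitter sheet uses as its lookup key.
// "http://blog.example.com/2015/03/post" splits into
// ["http:", "", "blog.example.com", "2015", "03", "post"]
function baseUrl(postUrl) {
  return postUrl.split("/")[2];
}

// A tiny stand-in for the BlogTwitter sheet: hostname -> handle
var blogTwitter = { "blog.example.com": "@exampleblogger" };

var host = baseUrl("http://blog.example.com/2015/03/some-post/");
console.log(host, blogTwitter[host]);
```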

Next, set up the recipes in IFTTT to copy Diggs to Delicious and subsequently Delicious to the Google Spreadsheet.  IFTTT is pretty easy to use, so I'll leave it up to you to figure out how to get the data into Delicious and the spreadsheet.  Just ensure, if you are using "Add Row to Spreadsheet" as your "that" action, that you populate all three columns in the spreadsheet in the following order (Title, URL, 0) – the "0" needs to be added in order for the Google Script to function properly.  Let me know if you need a hand.

Now we need to set up an app to allow Google Script to send the tweets for us.  Log into your Twitter account and head to http://dev.twitter.com/apps/.  Once there, click the "Create New App" button in the top right-hand corner.  Most of the information you put here doesn't matter, with the exception of the Callback URL – this needs to be "https://script.google.com/macros/"

twitterapp

Once created, click on the Permissions tab and ensure that the access is set to “Read and Write”

permissions

Now we need to get the Consumer Key and Consumer Secret – these are on the "Keys and Access Tokens" tab, and we will need to copy them for use within our Google Script later, so shove them over to Notepad or vi or something 🙂

tokens

Now we are done with Twitter and it’s time to get into the good stuff!  Open up your Google Spreadsheet and make sure you have the following sheets within it.

  • Sheet1 – named exactly this; it will be a placeholder for the information coming from IFTTT
  • BlogTwitter – this sheet contains all of the blog-to-Twitter-handle mappings
  • Tweets – this sheet will be a placeholder for our tweets

Again, feel free to simply copy my spreadsheet – it may make it easier and already has the BlogTwitter sheet populated.

Beyond setting up the sheets with the above names in the same order, there is nothing we really need to do on the spreadsheet itself – it's the code behind that we really need.  To get there, select Tools->Script Editor.  When the dialog appears, select "Blank Project" under "Create New Script For".  If you copied my sheet, you will simply be brought into an already existing blank project.

Before we can get started with the code there are a couple of things we need to do.  Since I use the Google URL shortening service, you will need to enable it in the project resources.  This is done under Resources->Advanced Google Services – find the URL Shortener API and switch it to On.  You will also need to turn this service on within the Google Developers Console – the link to do so is right within that same dialog, so go do that.

shortener enable

So, as far as the code goes, I'm just going to dump it all right here so you can copy/paste all of it – I'll explain a few things about it underneath.

function sendTweet(){
 
var TWITTER_CONSUMER_KEY = 'CONSUMERKEYHERE';
var TWITTER_CONSUMER_SECRET = 'CONSUMERSECRETHERE';
var oauth = false;
 
function authTwitter(){
 
var oauthConfig = UrlFetchApp.addOAuthService('twitter');
oauthConfig.setAccessTokenUrl('https://api.twitter.com/oauth/access_token');
oauthConfig.setRequestTokenUrl('https://api.twitter.com/oauth/request_token');
oauthConfig.setAuthorizationUrl('https://api.twitter.com/oauth/authorize');
oauthConfig.setConsumerKey(TWITTER_CONSUMER_KEY);
oauthConfig.setConsumerSecret(TWITTER_CONSUMER_SECRET);
};
 
var requestData = {
'method': 'POST',
'oAuthServiceName': 'twitter',
'oAuthUseToken': 'always'
};
 
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheets()[2]);
 
var sheet = ss.getSheetByName("Tweets");
var tweet = sheet.getActiveCell().getValue();
var encodedTweet = encodeURIComponent(tweet);
 
if (tweet!='') {
 
if (!oauth) {
authTwitter();
oauth = true;
};
 
UrlFetchApp.fetch('https://api.twitter.com/1.1/statuses/update.json?status=' + encodedTweet, requestData);
 
sheet.deleteRow(1);
 
}
 
};
 
function checkNew()
{
var ss = SpreadsheetApp.getActiveSpreadsheet();
var iftsheet = ss.getSheetByName("Sheet1");
var values = iftsheet.getDataRange().getValues();
for(var i=0, iLen=values.length; i<iLen; i++) {
if(values[i][2] != "1") {
 
// get Twitter Hangle
var urlPath = values[i][1].split("/");
var baseURL = urlPath[2];
var twitterHandle = findTwitterHandle(baseURL);
 
// get other data
var postTitle = values[i][0];
var postURLLong = values[i][1];
 
var URLShort = getShortenedUrl(postURLLong);
// build tweet string
var myTweet = buildTweet(postTitle, URLShort, twitterHandle);
 
// place variable in next available row on Tweets spreadsheet
var targetSheet = ss.getSheetByName("Tweets");
var lastRow = targetSheet.getLastRow() + 1;
var targetCol = "A" + lastRow.toString();
targetSheet.getRange(targetCol).setValue(myTweet);
values[i][2] = "1"; // flag row as processed so it isn't tweeted twice
}
} 
iftsheet.getDataRange().setValues(values);
}
 
function getShortenedUrl(url){
 
var longUrl = UrlShortener.newUrl();
longUrl.setLongUrl(url);
 
var shortUrl = UrlShortener.Url.insert(longUrl);
 
return shortUrl.getId();
}
 
 
function buildTweet(postTitle, postURL, twitterHandle)
{
var tweet = "[Shared] " + postTitle + " - " + postURL;
if (typeof twitterHandle != "undefined")
{
tweet += " via " + twitterHandle;
}
 
var tweetlength = tweet.length;
 
if (tweetlength > 140)
{
var charsToTrim = tweetlength - 135;
 
postTitle = postTitle.substr(0, postTitle.length-charsToTrim);
tweet = "[Shared] " + postTitle + " - " + postURL;
if (typeof twitterHandle != "undefined")
{
tweet += " via " + twitterHandle;
}
}
return tweet;
}
 
function findTwitterHandle(blogurl) {
var twitterSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("BlogTwitter");
var values = twitterSheet.getDataRange().getValues();
 
for(var i=0, iLen=values.length; i<iLen; i++) {
if(values[i][0] == blogurl) {
return values[i][1];
}
} 
 
}

Copy and paste all of the above code into the blank code.gs file that was created.  First up, remember that consumer key and secret from Twitter – yeah, they need to go into the TWITTER_CONSUMER_KEY and TWITTER_CONSUMER_SECRET variables near the top of the script.  That is really the only edit to the code you need to make.  If you look at the script there are a few different functions.  checkNew() – this is what takes the data from IFTTT on Sheet1 and transforms it into tweet format, then places it on the “Tweets” sheet.  You can see it calls out some other functions which shorten the URL and ensure the tweet is under 140 characters, as well as flip a 0 to a 1 on Sheet1 (ensuring we don’t tweet the same thing twice).  The sendTweet() function – this takes whatever is in A1 on the Tweets sheet and, you guessed it, tweets it.  When it’s done, row 1 is deleted, allowing the next tweet to move into A1 and be processed the next time the function runs.

To test, put some data in the first sheet: the blog post title in Column A, the blog post URL in Column B, and a “0” in Column C.  You can do this manually for now if you want, or if you have the IFTTT recipes set up, let them take charge.

Then, at the top of the script editor change the function drop-down to “checkNew” and click “Run”.  If you don’t see the functions listed within the drop-down you may need to “File-Save” and completely close out both the script editor and the spreadsheet.

test1

Once it has completed its run you should be able to flip back to the spreadsheet and see a tweet sitting inside A1 of the “Tweets” sheet.  The data on “Sheet1” should also be updated with a 1 flag.

From here it’s a matter of flipping back to the Script Editor and running the “sendTweet” function the same way you did the “checkNew”.  You will most likely be prompted to authorize Google as well as authorize Twitter during this process.  Go ahead and do that.  Sometimes I have found that the first time you run it you will need to authorize Google, then you will need to run it again to authorize Twitter.  Once both applications are authorized your tweet should have been sent out!

So this is all great, everything is working.  Now to setup triggers for these functions as running them manually doesn’t make much sense.  To get into your triggers select Resources->All your triggers from the script editor.

triggers

As you can see I have set up two.  One that runs my checkNew function every 30 minutes – if it finds a newly inserted row from IFTTT it will build the tweet for me.  The other, sendTweet runs once an hour – this will simply take one of the tweets and send it out!  This way, if there are tweets available it will tweet just one per hour, so I don’t flood everyone with tweets!
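For the click-averse, the same two triggers can also be created from code.  Here’s a minimal sketch (assuming the function names from the script above – run createTriggers once from the script editor):

```javascript
// Google Apps Script – run once to create both time-based triggers.
// ScriptApp is provided by the Apps Script environment.
function createTriggers() {
  // Build tweets from any new IFTTT rows every 30 minutes.
  ScriptApp.newTrigger('checkNew').timeBased().everyMinutes(30).create();
  // Send at most one queued tweet per hour.
  ScriptApp.newTrigger('sendTweet').timeBased().everyHours(1).create();
}
```

Either way works – the Resources menu just lets you see and delete the triggers more easily later on.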

And that’s that!  I know this was a long-winded post but I wanted to convey what I was trying to accomplish and how I did it!  Hopefully some of you find it useful if you are looking to deploy a similar solution.  There is a ton of great content out there and this is just my part in helping to spread the good word within the virtualization community.

If you are trying to get through this and need a little help don’t hesitate to reach out – I’m more than willing to help!  If you have any ideas on how to better this process I’d love to hear them as well.  The part currently nagging at me is that manual BlogTwitter sheet – not sure how it could be automated or improved, but if you have ideas let me know…  Thanks for reading!  And sharing 🙂

The Software Defined VMUG Advantage

vmugI’ve been a VMUG Advantage member for going on three years now, so I’ve experienced firsthand some of the great value it brings.  Basically, if you plan on attending VMworld and doing a couple of exams or training courses throughout the year, the VMUG Advantage discounts more than pay for the program fees.  The full VMUG Advantage benefits can be seen here, but for laziness’ sake, here’s a quick outline of the “biggies”.

  • $100 discount on VMworld
  • 20% off VMware delivered classes
  • 20% off VMware certifications
  • Access to all VMworld online content
  • 50% off VMware Workstation/Fusion
  • Discounts on VMware Learning Zone, VMware On-Demand, Lab Connect and more.

With that said, those who can’t attend VMworld or aren’t planning on taking any VMware training or exams might find themselves scratching their heads wondering what real value the VMUG Advantage program can bring them.  And honestly, before now there wasn’t a whole lot.

Enter EVALExperience

New in 2015, VMUG Advantage subscribers now have access to EVALExperience!  EVALExperience provides subscribers with exclusive access to certain pieces of VMware software, coupled with a 365-day non-production NFR license.  Basically, VMUG Advantage subscribers will be able to download and use nine different pieces of VMware software in their home labs in order to explore new features, gain hands-on experience and further educate themselves on the offerings.  So what’s included?  Well, it’s not a simple lightweight set of software – there are some sweet (and expensive) applications included (shown below) that should be able to keep you busy for the year.

evalexperienceproducts

VMware vCenter Server 5 Standalone, VMware vSphere w/ Operations Management (Enterprise Plus), VMware vCloud Suite Standard, VMware vRealize Operations Insight, VMware vRealize Operations 6 Enterprise, VMware vRealize Log Insight, VMware vRealize Operations for Horizon, VMware Horizon Advanced, VMware Virtual SAN

To me, EVALExperience is a solution to a couple of gripes within the community.  First, it adds that extra value for those VMUG Advantage subscribers who don’t attend VMworld, or don’t take any official training or certifications.  IMO the ability to evaluate and download software for longer than the usual 60-day trial period is well worth the $200 price tag of VMUG Advantage.  Secondly, it provides somewhat of a replacement for the VMTN that VMware used to offer.  The VMTN, which offered a similar solution of downloading and using NFR licenses of VMware’s products, was put to rest a number of years back.  The community has been trying for a while now to get VMware to reinstate the VMTN – to me, this program, along with the teaming up with VMUG, answers those calls.

So, if you are already a VMUG Advantage subscriber you should have received an email outlining how you can gain access, if you aren’t, go and sign up here.  More info on the EVALExperience program can also be found here.  For now, a big thanks to VMUG and Happy Testing!!!

What’s in a name? – The importance of naming conventions

namebookAll too often I find myself in the following situation – I’m in the midst of deploying a new service or project into our environment.  I’ve gone through tons of user manuals and articles, weeks of training and technical tutorials, and successfully completed proofs of concept and pilots, yet as I begin the production deployment I sit at my desk puzzled, staring into space in deep thought, perplexed by the multitude of options running through my mind over what I am going to name this server.  In the Windows world, those 15 simple characters allowed in the NetBIOS name put my mind in a tailspin – sure, we have our own naming conventions for servers: the first three digits are dedicated to a location code, followed by a dash, then whatever, then ending in a “-01” and incrementing – so that leaves me with 8 characters to describe the “whatever” part I’m deploying.  In some instances, those 8 simple characters tend to be the most difficult decision in the project – even with somewhat of a server and endpoint naming convention in place.

But it goes beyond just servers and endpoints.

Sure, most companies have naming processes in place for their servers and workstations – and I’ll eventually come up with something for those 8 characters after a coffee and some arguments/advice from others.  The most recent struggles I’ve had, though, apply not to servers, but to inventory items within vSphere.  There are a lot of objects within vSphere that require names – Datastores, PortGroups, Switches, Folders, Resource Pools, Datacenters, Storage Profiles, Datastore Clusters… I could go on for quite some time but you get the point.  All of these things require some sort of a name, and with that name comes a great deal of importance in terms of support.  For outsiders looking in on your environment trying to understand what something is, as well as for your own “well-being” when troubleshooting issues – a name can definitely make or break your day.

So how do we come up with a naming convention?

boy-61171_640This sucks but there is no right or wrong answer for your environment – it’s your environment and you need to do whatever makes sense for you!  Unlike naming children we can’t simply pick up a book called “1001 names for vSphere Inventory Items” and say them out loud with our spouses till we find one that works – we need something descriptive, something we can use when troubleshooting to quickly identify what and where the object is.

Back to my recent struggle – it had to do with datastores.  For a long time I’ve had datastores simply defined as arrayname-vmfs01 and incrementing.  So, vnx-vmfs01, vnx-vmfs02, eva-vmfs01, etc…  This has always been fine for us – we are small enough that I can pretty much remember what is what.  That said, after adding more arrays and a mixture of disk types (FC, SAS, NLSAS, SATA) I began to see a problem.  Does eva-vmfs01 sit on SAS or SATA disks?  Is it on the 6400 or 4400?  Is it in the primary or secondary datacenter?  Is this production or test storage?  Does it house VMs or backups?  What is the LUN ID of eva-vmfs01?  I know – most of these questions can be answered by running some CLI commands, clicking around within the vSphere client or performing a little more investigation – but why put ourselves through this when we can simply answer them within the object’s name?

So I set out to Twitter, asking if anyone had any ideas in regards to naming conventions for datastores – and I got a ton of responses, all with some great suggestions, and all different!  So to sum it all up, here are a few suggestions of what to include in a datastore name that I received.

  • Array manufacturer/model/model number
  • Disk Type (SAS, FC, NLSAS, SATA, SCSI)
  • Lun Identifier/Volume Number
  • Destination Type (Backups, VMs)
  • Storage tiers (Gold, Silver, Bronze / Tier 1, Tier 2, Tier 3)
  • Transport type (iSCSI, NFS, FC, FCoE)
  • Location/Datacenter
  • Raid Levels
  • vSphere Cluster the datastore belongs to
  • And of course, a description

Yeah, that’s a crap-load of information to put in a name – and maybe it makes sense to you to use it all but in my case it’s just too much.  That said, it’s all great information, the toughest part is just picking the pieces you need and arranging them in an order that you like.  Having something named DC1-TOR-HP-EVA-6400-FIBRE-FC-R6-VM-GOLD-CLUSTER1-L15 is a little much.

And in the end

So what did I choose?  Well, to tell you the truth I haven’t chosen anything yet.  I thought writing this post might spark some creative thinking and it would just pop into my head – but no – I’m more confused than ever.  Honestly, I’m leaning towards something like array-disktype-lunid, like EVA-FIBRE-L6 or VNX-NLSAS-L4, but I just don’t know.  This stuff is certainly not a deal breaker, but I’m just trying to make my life a bit easier.  If you think you have the end-all be-all of naming conventions feel free to leave a comment!  I’m always open to suggestions!
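Just to make that leading candidate concrete, here it is expressed as a throwaway JavaScript helper – purely illustrative, the function name and the components in the array are my own picks:

```javascript
// Hypothetical sketch: build a datastore name as ARRAY-DISKTYPE-L<lunId>.
function datastoreName(array, diskType, lunId) {
  // Join the chosen components with dashes and normalize the case.
  return [array, diskType, 'L' + lunId].join('-').toUpperCase();
}

console.log(datastoreName('eva', 'fibre', 6)); // EVA-FIBRE-L6
console.log(datastoreName('vnx', 'nlsas', 4)); // VNX-NLSAS-L4
```

Swap the components in the array to test-drive any of the other suggestions from the list above.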

#VFD4 – A Canadian in Texas?

VFD-Logo-400x398I know, I didn’t leave much to the imagination with the blog title, and as you may have guessed I’m going to be attending Virtualization Field Day 4 in Austin, Texas this January!

I was ecstatic when I received the invitation and it didn’t take much convincing to get me to go!  I’ve been a huge fan and supporter of the Tech Field Day format over the years, and not too many of them go by where I don’t catch a few sessions on the livestream.  The fact that Austin is on average 30 degrees Celsius warmer than here in January sure does help too!

The event

Aside from the heat I’m definitely looking forward to being a part of VFD4.  This will be the fourth installment of Virtualization Field Day and it takes place January 14th through the 16th in Austin, Texas.  The Tech Field Day events bring vendors and bloggers/thought leaders together in a presentation/discussion style room to talk everything and anything about given products or solutions.  I’ll point you to techfieldday.com to get a way better explanation about the layout of the events.

The Delegates

This will be my first time as a delegate and I’m feeling very humbled at having been selected.  Honestly, I get to sit alongside some of the brightest minds that I know.  Thus far Amit Panchal (@AmitPanchal76), Amy Manley (@WyrdGirl), James Green (@JDGreen), Julian Wood (@Julian_Wood), Justin Warren (@JPWarren), and Marco Broeken (@MBroeken) have all been confirmed as delegates, with more to be announced as time ticks on.  Some of these people I’ve met before, some I know strictly from Twitter and others I haven’t met at all, so I’m excited to catch up with some people as well as meet some new ones.

The Sponsors

So far six sponsors have signed up for #VFD4 – Platform9, Scale Computing, Simplivity, Solarwinds, StorMagic and VMTurbo.  Just as with the delegates, some of these companies I know a lot about, some I know a little, and others I certainly need to read up on.  Having seen many, and I mean many, vendor presentations in my lifetime I have tremendous respect for those that sponsor and present at Tech Field Day.  The sessions tend to be very technical, very interactive, and very informative – three traits that I believe make a great presentation.  I’m really looking forward to seeing my fellow Toronto VMUG Co-Leader and friend Eric Wright (@discoposse) sitting on the other side of the table 🙂

Be sure to follow along via Twitter by watching the #VFD4 hashtag leading up to and during the event.  Also a livestream will be setup so you too can watch as it all goes down.

I’m so grateful for this opportunity – so thank you to my peers, the readers of this blog, Stephen Foskett and all the organizers of the great Tech Field Days, and the virtualization community in general – see you in Texas!

Get the cobwebs out of your Data Center with CloudPhysics

Just the other day I was thinking to myself, you know what this world needs – more Halloween-themed infographics relating to IT.  Thankfully, CloudPhysics, with their analytic powers, pulled that thought out of my head and did just that!  With a vulnerability dubbed Heartbleed, how can you resist it?

On a serious note, some of the data that CloudPhysics has should really scare you.  22% of vSphere 5.5 servers remain vulnerable to Heartbleed!  41% of clusters do not have admission control enabled!  These are definitely some spooky stats and shouldn’t be ignored!

CloudPhysics-Halloween-2014

But what’s Halloween without some tricks and treats, right?

CloudPhysics has you covered there as well!   Be sure to grab their Halloween Cookbook – a collection of tips and tricks on how you can remediate issues within your data center and stay out of the spooky stats that they are collecting.  For an even bigger treat, be sure to sign up to let CloudPhysics help find the data center goblins for you – oh, for free!  Better yet, sign yourself up for the community edition of CloudPhysics – it’s completely free and can give you some great insight into what’s going on inside your virtual data center!  Be sure to head to their official blog post to get all the information you need to fill up your bag!

Talkin’ smack on thwack

solarwinds-thwack-online-communityOK OK, I’m not really talking smack – this is simply my attempt at coming up with catchier blog titles, and that rhymed so I thought it would be a good idea – nonetheless, I feel like it failed…

But down to business – Solarwinds and Stephen Foskett have granted me the honor of being a thwack ambassador for the month of September – and my topic: database analysis and performance.  I know what you might be thinking – it’s odd for this virtualization geek to be talking about database performance – but the fact of the matter is I’m responsible for many databases in my day job, so I couldn’t be more excited!  Excited to share what I know in an area that isn’t directly related to virtualization – but even more so, excited to learn more in an area where I feel I could improve.

So, if you have some time head on over to the thwack community and check out my first two posts; “It’s always the databases fault” and “Making performance metrics make sense to your business“.  Leave a comment and you may just win yourself a jambox!

thwack isn’t just about giving away a prize to hammer you with info on Solarwinds products – honestly, I’m very impressed with the content over on thwack!  There are some great conversations going on over there dealing with everything from virtualization monitoring to cloud to mobile to, well, database analysis.  There’s a lot to learn, and thwack is definitely a great community to join in order to not just get answers to your problems, but to genuinely expand your knowledge – be sure to check it out!

mwpreston at #VMworld

With less than a week until the big show and only three days before I hop into the big tin can to get there, I thought a post regarding my “planned” VMworld experience was in order!  I say planned since VMworld is always a crazy experience and things can change quickly with so much to do and learn.  That said, I do know of a few items in my calendar that are set in stone!

Opening Acts

This is a great idea for a conference primer that is taking place this year!  The VMunderground team, along with the vBrownbag crew, have set the stage for some great knowledge-dropping fun on the Sunday before VMworld.  I’m happy to say that I will be taking a spot on the last panel of the day, dealing with Architecture and Infrastructure.  Needless to say I’m excited to be taking part, but what I’m most excited about is the other rockstars I’ll be sitting next to – Melissa Palmer (@vmiss), Phoummala Schmitt (@exchangegoddess), Maish Saidel-Keesing (@maishsk) and John Arrasjid (@vcdx001), moderated by Matt Liebowitz (@mattliebowitz).  Umm, yeah, you read that right – the FIRST VCDX!  I’ll certainly be sitting on the shoulders of giants during this one and hopefully have a little to contribute, but am mostly looking forward to learning from the best!  Opening Acts kicks off at 1PM at City View at Metreon, with our panel starting at 3PM.

Book Signing

Yeah, so not only do I get to partake in a panel session with some of the brightest virtualization minds in the world, I also get to sit down and sign books for half an hour – each VMworld experience seems to top the previous one ever since I started attending.  Anyways, I’ll be at the VMworld bookstore from 1:00PM to 1:30PM on Tuesday if you want to have your copy of Troubleshooting vSphere Storage signed.  Honestly, this is a first for me so I have no idea what to write in a book, and Google hasn’t been much of a help with this one!  Either way, if you want the book signed, any other book signed 🙂, or just want to chat, come on by and I’ll be there!

Other Stuff

booksHey, I’ll also have a dozen or so copies of Troubleshooting vSphere Storage to give away so if you are looking for one come and find me – I may have one on me at the time and if I do it’s yours!  Where might you find me?  Well, you can bet your a$$ that I will be at VMunderground on Sunday!  Monday, probably the VMUG leader reception and vFlipcup.  Tuesday, VMUG leader lunch, the vExpert reception, CTO party, Veeam – ugh!  Exhausted just thinking about all of this!  Wednesday I will be sure to hit up the VMware Canada reception eh!  Then move on to of course, the VMworld party!!!  Also you can find me in the hang space and blogger tables periodically throughout the conference.

stickers

On another note, I have a ton of these mwpreston dot net stickers to hand out as well (don’t worry, I’ll cut them) – I’ll probably just scatter them throughout the place, but if you can’t find one and really really want one for some odd reason, just ask!   It’s my shameless self-promotion way of saying thank you to all of you for making this blog what it is today!

Anyways, there’s lots to do and lots to pack so I’ll leave it at this!  Can’t wait to see everyone again – the community really makes the VMworld experience for me – if not for community, it’d just be another conference!