Friday Shorts – Veeam Endpoint Multi-boot, Community Awards, New Pernix UI and more…

It became very clear to me sitting out there today that every decision I’ve made in my entire life has been wrong. My life is the complete opposite of everything I want it to be. Every instinct I have, in every aspect of life, be it something to wear, something to eat – it’s all been wrong. – George Costanza

Multibooting Veeam Endpoint USB

Veeam has a pretty nifty little product in their Endpoint Backup solution.  Honestly, just the other day I was wondering if it was still installed on my laptop, as I hadn't noticed it at all – sure enough, there it was, quietly doing its thing.  Anyways, Vladan Seget has a great article on his blog about creating a multi-boot USB stick with the Veeam Endpoint Backup recovery ISOs on it in order to support various hardware and laptop flavors!  Definitely something to check out if you manage multiple hardware platforms and want to use VEB to protect them all!

2016 Vendor Community Awards

There are a ton of community award/recognition programs being run by vendors these days.  It seems like almost everyone is trying to recognize the hard work that community leaders, bloggers, and evangelists alike are putting in to help spread the word about everything tech.  That said, after seeing Andrea Mauro's post about the programs coming up for 2016, I realized I didn't even know about all of them.  If you are interested in applying for a program, or just want to know more about them, head over to Andrea's blog and check it out!

Automate the answering of questions!

There is nothing more enraging than writing a great big automation script only to find the vSphere client sitting at a prompt, waiting for you to answer some kind of stupid question!  I've never been able to find a way to work around some of these issues, but after seeing Luc Dekens' post on answering the infamous CD-ROM unmount question, I might have a push in the right direction!  Anyone dealing with automation and PowerCLI should really be following Luc and should certainly check out his blog!
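
If you just need something to unstick a script in the meantime, the general pattern looks something like the following – a minimal PowerCLI sketch using the Get-VMQuestion/Set-VMQuestion cmdlets (the VM name and option label here are made up, and the exact option text varies by question):

# Answer any pending question on the VM with its default option
Get-VM "MyVM" | Get-VMQuestion | Set-VMQuestion -DefaultOption -Confirm:$false

# Or answer with a specific option label instead
Get-VM "MyVM" | Get-VMQuestion | Set-VMQuestion -Option "button.yes" -Confirm:$false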

Testing JSON Syntax!

When dealing with a lot of API calls, especially when trying to form your own request body, it can sometimes be a little monotonous trying to find an error in, or test, the JSON syntax you have created.  Jonathan Medd has a great blog dealing with all that is PowerShell, and his latest Quick Tip, Testing JSON Syntax, walks us through a quick and easy way to make sure that none of our JSON is malformed – and if it is, how to quickly find out where the problem lies!
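
The core of the trick (my own quick sketch here, not necessarily Jonathan's exact code) is simply to let ConvertFrom-Json do the validating for you inside a try/catch:

try {
    $null = Get-Content .\body.json -Raw | ConvertFrom-Json
    Write-Host "JSON is valid!"
} catch {
    Write-Host "JSON is malformed: $($_.Exception.Message)"
}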

While we are talking about automation!

Sometimes I wish I had more time to spend within the VMware Hands-on Labs environment – there is a ton of cool technology up there, available to play with absolutely free of charge!  I've done a dozen or so labs in my time, mostly centering around newly released products or things that spark my interest.  What I didn't know is that there are labs there centered around VMware's development tools and their respective SDKs.  I really need to set aside a few hours to have a look at these, as it's something I struggle through every time I attempt to utilize them!

PernixData and their new UI

I have always been a fan of products with a clean, crisp, usable UI.  I like whitespace and I like intuitive design, and when attending Tech Field Day presentations the UI is always the first thing I see – it really sets the stage for the whole presentation!  I saw Pernix at VFD5 along with their newly redesigned UI and it did not disappoint!  Pete Koehler (@vmpete) has a great post on his blog covering almost everything there is to know about the new PernixData UI – why they went this route, what it involves, and some of the goodies to really focus on!  If you are a fan of Pernix, or simply a fan of beautiful interfaces, check out Pete's post!

Embedded to External, External to Embedded – It's all possible now!

For those that made the jump to vSphere 6 before Update 1 was released, you may have noticed some odd and annoying limitations during the upgrade – the first being there was no "supported" way to upgrade directly from 5.5 embedded SSO to a 6.0 external PSC.  You had to first break out your 5.5 SSO to another box and then proceed with the upgrade – it was just a big pain, to tell you the truth.  Along with Update 1 came some tools that allow us to simply repoint and reconfigure our vCenter Servers to new PSCs, which essentially allows us to perform the embedded upgrade and then repoint to a newly installed external PSC – a welcome addition!  If you want to learn more, there is a great post by Ryan Johnson on the vSphere blog outlining all of the scenarios and the commands you need to run to repoint and reconfigure!
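
From memory, the repoint itself boils down to a couple of one-liners run on the vCenter Server/VCSA – the FQDNs and credentials below are just examples, so check Ryan's post for the exact syntax for your scenario:

# Repoint a vCenter Server to a different external PSC
cmsso-util repoint --repoint-psc psc02.lab.local

# Reconfigure an embedded deployment to point at a newly deployed external PSC
cmsso-util reconfigure --repoint-psc psc02.lab.local --username administrator --domain-name vsphere.local --passwd VMware1!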

Disclose all the things!

There are few things in this world that amuse me to the level that Justin Warren's disclosure posts do, and his latest, regarding VMworld 2015, does the trick just the same!  Justin is a great writer and I follow his blog religiously – he has lots of excellent posts and is a very smart man with an interesting take on everything – including disclosures!  Justin spares no attention to detail in these types of posts, with disclosures of food ("Some nice roast chicken and vegetables from somewhere local paired with Diet Coke. There was pie, but I had a cookie instead") and schwag ("EVO:RAIL threw a cap at me, and I grabbed some stickers, one of which is on my laptop. The cap will go into the cap drawer because my wife says I'm not allowed to wear baseball caps.") alike!!!  Aside from these being incredibly amusing, they do have an impact on you – just think of all the small, little things that you receive during a conference and how they may influence you!

Update m1000e/VRTX firmware with a network share from the Dell Repository Manager

If you've ever tried to tackle all the firmware on an m1000e/VRTX and its respective blades, you probably know what a hot mess it can be – using various methods to update different pieces of hardware: some requiring a Live CD, some requiring a bootable USB key, some requiring you to extract an EXE and find secret payload files, and some being installed through the GUI.  It's a full-time job just to keep track of all the different pieces of firmware and how they are installed.  Now, in order to help minimize this, Dell has introduced the Dell Repository Manager – an online repository that will fetch the updates you need and serve them up to your CMC controllers for installation.  The CMC can then go and fetch these firmware updates from DRM and apply them in an ordered, staged, and automated fashion!

Oh, it sounds so picture perfect, doesn't it?  The fact of the matter, though, is that if you have ever tried to work with any of these update/firmware management products – be it Dell, HP, or anyone else – you know they are not as intuitive and easy to use as advertised; that, and they are constantly being updated, and best practices are constantly changing!  It's a moving target for sure!  That said, taking the time to set it up properly still far outweighs the pain of hitting the Dell support site, pulling down individual firmware packages, and processing them manually – so why not spend the time now, which will hopefully save you some time later?  I'm lazy by nature, and I've followed the following steps to make it work for me!

Install and Configure Dell Repository Manager

First up we need to set up the Dell Repository Manager (DRM) – you should be able to find the downloadable MSI under the Systems Management portion of any of your supported Dell products on their driver download page.  The install itself requires just a few clicks of 'Next' :)

There is little configuration to do in order to get DRM functional.  Basically, we just need to sync the Dell online database with our local install of DRM.  To do so, select 'Source->View Dell Online Catalog'.  In the dialog box shown, simply click 'Yes' to update your database.


After a few minutes of 'Reading Catalog' and 'Writing data to database' we should be good to continue with the creation of our repository.

Creating your m1000e repository

Now it is time to create a new repository which will pull down the updates for the hardware existing within our m1000e.  In order to do this we will need to export the inventory of our CMC to a file that can be imported into DRM.  To do this, head to the Update tab within the CMC interface (Chassis Overview->Server Overview->Update).  Select 'Update from Network Share' under 'Choose Update Type' and then click 'Save Inventory Report'.


Doing this should save a file (Inventory.xml) to your local hard drive – this file contains the inventory of the blades and the hardware inside of them, and it needs to be copied over to your DRM server.  Now we can proceed to create a new repository based off of our Inventory.xml file, as shown below…


Within DRM select Repository->New->Dell Modular Chassis Inventory.  Give your repository a proper name and description.


Select 'Dell Online Catalog' as our base repository.


Point to the location where you have copied the Inventory.xml file and ensure that ‘Latest Updates for all devices’ is selected.


On the Summary screen ensure that all of the OS components are selected.  This just ensures that no matter what OS we have on the blades (Linux, Windows, ESXi) we will get the proper firmware packages needed to deploy.

After a few minutes we should be redirected back to our main screen of DRM with the focus on our newly created repository.  The next thing we need to do is to export this repository into some sort of deployable format that can be consumed by our servers and chassis.  To do so, make sure that all of the bundles listed are checked and select ‘Create Deployment Tools’ in the top right hand corner.


Here is where we determine what type of deployment tool to create – you can see we can create a bootable ISO, a SUU, etc.  Since we will be installing from a network share we need to create a catalog, so select 'Create Custom Catalog and Save Updates' and continue.


Provide a path where your repository, catalog, and updates will be stored, and be sure to select 'Full Repository', as we will need both the catalog.xml file and the updates themselves – then click 'Next'.

Once completed, the job gets submitted into the Job Queue and can take quite some time, as it is pulling down all of the updates.  You can monitor this by browsing the queue at the bottom of the screen.  When it's all said and done you should see a number of folders and the catalog.xml file in your specified location.  Just a note here: if you don't see Catalog.xml, I've had a few instances where I needed to re-run this process selecting only to export the catalog file – then re-running the complete process again selecting the full repository – told you it was a hot mess!  Anyways, after you are done, go ahead and set up a Windows share somewhere on this system – it doesn't matter where it is, so long as you can browse to this folder using it.
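
If you'd rather skip the clicking through Explorer, an elevated command prompt on the DRM server can create the share in one line – the share name and path below are just examples:

net share DRMRepo=C:\DRM\m1000e /GRANT:Everyone,READ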

Setup the CMC

At this point we are through with DRM and need to go back to our CMC in order to set up our network share.  This is done in the same location where we exported our inventory (Server Overview->Update), selecting 'Update from Network Share' as our Update Type and clicking 'Edit' to set up our connection to the newly created CIFS share.


Enter the information that pertains to your share, using 'CIFS' as the protocol.  You will need the IP address of your DRM server, the share name that you have set up, any further directories underneath the share if applicable, the name of the catalog (always Catalog.xml unless you specified otherwise), as well as the proper domain and credentials to connect.  To test your connection to the server, select 'Apply' and then 'Test Network Connection'.  Once successful, click 'Back' to return to the update screen.
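
As an illustration, with the share created earlier the settings might look something like this (all of these values are made up – substitute your own):

Protocol:      CIFS
IP Address:    192.168.1.50
Share Name:    DRMRepo
Catalog File:  Catalog.xml
Domain\User:   LAB\drm-svc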


At this point we should be able to simply click 'Check for Updates' and have the CMC query our DRM for any available updates and display them.


And voila – you can now select whether you would like to reboot now to apply the updates or wait until the next reboot, and kick things off by clicking 'Update'!  Easy peasy, right?  Not really – but at least this should help save some time…

A few troubleshooting tips to watch for

No Server is Ready for Update message

If this is displayed next to your Network Share information, then the first thing I would check is the version of the iDRAC on your blades.  In order to update from a network share your iDRAC must be at version 1.5 or higher, so if you are lower, update it!  As for how to do that, the easiest way I've found on a blade running ESXi is to enter the individual iDRAC web GUI for a given blade and browse to the Update section under iDRAC Settings.  This will ask for a file, and it's always a crap shoot as to where that file is or which package to download, depending on the current version of iDRAC you are on.  Since this is most likely an older version of iDRAC (below 1.5), you will most likely need the .d7 file.  Download the EXE for your server labeled 'iDRAC with Lifecycle Controller' and extract the files within it to a folder – inside this folder you should see a payload directory.  The file within that (firmimg.d7) is the file you will need to upload in order to update your iDRAC.  After updating there will be a brief iDRAC outage as it reloads – when it's back up, try 'Check for Updates' again on the CMC and it should now work.
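
As a rough example, Dell's Windows update packages can typically extract themselves from a command prompt with switches along these lines (the filename here is made up, and the switches can vary by package vintage – check the EXE's /? help if it balks):

iDRAC-with-Lifecycle-Controller_Firmware_XXXXX_WN32_A00.EXE /s /e=C:\idrac-extract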

Cannot check for updates message

This message is displayed when there is no catalog.xml file located in your exported CIFS repository.  Check to see if it is there – if it isn't, as mentioned earlier, re-run the Create Deployment Tools process and point to the same location, selecting 'Catalog file only'.  Once that has completed, start the Create Deployment Tools process again, pointing to the same location and selecting 'Full Repository'.  Check to make sure the timestamp on your catalog.xml file has been updated.

Caution icon next to repository progress

This generally means that you have some updates that require confirmation to download.  Simply double click the job in the job queue, click ‘Confirmation Needed’, and click ‘Accept’.

Any other possible issue and error

Call Dell :)

Resizing the root partition of the vCenter Server Appliance (VCSA)

There are many side effects of a root file system filling up – server halts, unexpected application crashes, slowness, midnight wake-up calls, etc.  And the root file system on the VCSA is no exception – in fact, I found this out while trying to deploy a VM from a template into my environment – I kept getting the dreaded 503 error, which stated nothing useful to help with the resolution!  But after a little bit of investigative work it appeared to me that the root file system on my VCSA was nearly full!  Now keep in mind this was in my lab, and in all honesty you should probably investigate just why your file system is taking up so much space in the first place – but due to my impatience in getting my template deployed, I decided to simply grant a little more space to the root partition so it had some room to breathe!  Below is the process I followed – may be right, may be wrong – but it worked!


Step 1 – Make the disk bigger through the vSphere Client!

This is a no-brainer – we can't expand the root partition into new space until we've expanded the disk belonging to the VCSA that hosts that partition!  So go ahead and log in to vCenter (or better yet, the host on which your VCSA runs) and expand its underlying disk.


Once you have done this you may need to reboot your VCSA in order to get the newly expanded disk to show as expanded – I for one couldn't find any solution that would rescan the disk within the VCSA to show the new space, but if you know one, by all means let me know in the comments!!!
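
For what it's worth, on other Linux VMs I've seen a device rescan along these lines pick up a resized disk without a reboot – no promises it behaves the same on the VCSA, so consider it an experiment:

echo 1 > /sys/class/block/sda/device/rescan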

Step 2 – Rewrite the partition table

Things are about to get dicey here!  We are going to use fdisk in order to recreate the partition table for the root filesystem – so relax, be careful, and take your time!!!

First up, let's have a look at our disk by running "fdisk -l /dev/sda".  As shown below, we can see that it is now reporting 25GB in size.


Next, we need to find the partition that our root filesystem resides on.  The picture of the "df -h" output at the beginning of this post confirms we are running on /dev/sda3 – this is the partition we will be working with…

So listed below is a slew of fdisk commands and options that we need to run – you can also see my complete output below…

First up, delete partition number 3 using the d option.

fdisk /dev/sda
d (for delete)
3 (for partition 3)

Now, let's recreate the same partition with a new last sector – thankfully we don't have to figure this out ourselves and should be fine utilizing the defaults that fdisk provides… this time selecting the n option, p for primary, 3 for our partition number, and accepting all of the defaults.

n (for new)
p (for primary)
3 (for partition number 3)

After accepting all the defaults we need to make this partition bootable again, and then write our changes out – all done inside fdisk by using 'a' and then '3' for our partition number, followed by 'w'.

a (to toggle the bootable flag)
3 (for partition number 3)
w (to write the new partition table and exit)


As you can see in the message pictured above, we need to perform a reboot in order for the newly created partition table to take effect – so go ahead and reboot the VCSA.

Step 3 – Extend the filesystem

Well, the hard part is over and all we have left to do is resize the filesystem.  This is a relatively easy step, executed using the resize2fs command shown below.

resize2fs /dev/sda3

After this has completed, a simple "df -h" should show that we now have the newly added space inside our root partition.


There may be other and better ways of doing this, but this is the way I've chosen to go – honestly, it worked for me and I could then deploy my template, so I'm happy!  Anytime you are using fdisk be very careful not to "mess" things up – take one of those VMware snapshotty thingies before cowboying around :)  Thanks for reading!

Friday Shorts – #VMUG, nmcli, All flash VSAN, Altaro and more…

Why hello there – it's been a while!  It's been a busy couple of months with work, conferences, and home life, and blogging has been put on the back burner for a bit.  I mean hey, I live in Canada and I need to get ready for the winter, eh!  It's a "Game of Thrones" winter around here!  Fear not though – over the past couple of months I've been doing some awesome things with Ravello, with a vSphere 6 upgrade, and with some other awesome automation and orchestration stuff, so I have a lot of posts filed under the idea category – there is no lack of content to be written!  All that said, for now let's just have a look at some great community posts.

More advantage to the VMUG advantage

VMUG Advantage has many benefits, including free NFR software evals; discounted training, certification, and conference fees; discount codes for software and labs; and more – but now we can add one more item to that list.  As of now VMUG is offering $600 of service credit with vCloud Air OnDemand.  I've reviewed vCloud Air OnDemand and can say that $600 is more than enough to get you in there and playing around for the year!  This is yet another great benefit of the VMUG Advantage program, so if you haven't bought it – do it!

Unexpected Signal: 11

Did you jump to get vSphere 5.5 Update 3 installed and running in your environment?  If so, you might want to check out this VMware KB, which outlines that the snapshot consolidation process may cause your VMs to fail with the above well-described error message :)  Sorry – nothing funny about it if you are running any backup solution that utilizes VADP to free up disks for processing!  Anyways, downgrading, powering off VMs and consolidating, or redeploying 5.5 are your resolution options for now!

Linux Networking through vRO

If you love vRO and automation and you don't follow the vCOTeam blog, then you should – do that first before continuing any further.  There, now that that's out of the way, have a look at this very detailed post on configuring Linux networking using nmcli – or better yet, doing the whole thing through a vRO workflow.  Awesome stuff!
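
Just to give a taste of nmcli itself (a quick example of my own, not pulled from the vCOTeam post – the interface name and addresses are made up):

# Create a static IPv4 connection on eth0 and bring it up
nmcli connection add type ethernet ifname eth0 con-name static-eth0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli connection up static-eth0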

All Flash VSAN in the homelab

Jason Langer (@jaslanger) has a great article about spinning (err, flashing) up an All Flash VSAN setup in his homelab – showing you both the hard way and the easy way, this is a great guide for those looking to test out AF VSAN in their spare time (you know, when you aren't building Lego and whatnot :))

Rubrik and vRealize Orchestrator

Well, if you are a Rubrik customer and you are a vRO lover, then I suggest you head over to Eric Shanks' blog, as he (along with Nick Colyer) has a slew of blog posts related to vRO and Rubrik and how to do just about anything utilizing the APIs that Rubrik provides.

Speaking of backup – Altaro is now on the scene

There's a new player in the backup space when it comes to protecting VMware virtual machines!  I had a chance to sit on the beta for Altaro VMware backup, and although I didn't have a lot of time to check it all out, I did get it installed, configured some backups, and liked what I saw!  There have been a lot of community reviews of their software and first impressions are very positive – anyways, all the data protection junkies can check them out here.

Ravello Systems – Inception without the kick!

If you have at all visited this blog in the last 4 or so months you shouldn't be surprised to hear that I'm a pretty big Ravello Systems fan!  I was part of their beta for nested ESXi and I've written about my thoughts on that plenty of times.  With the beta out of the way and access granted to all the vExperts, Ravello Systems took hold of the clicker at VFD5 in Boston for the first of what I hope are many Tech Field Day presentations.

Disclaimer: As a Virtualization Field Day 5 delegate, all of my flight, travel, accommodations, eats, and drinks are paid for.  However, I did not receive any compensation, nor am I required to write anything in regards to the event or the sponsors.  I have also been granted early access to the Ravello Systems ESXi beta in the past, and have received free access as a vExpert.  All that said, this is done at my own discretion.

As I mentioned earlier, I've written plenty about what I've done utilizing Ravello Systems.  The platform is great for configuration validations, home lab type stuff, and for just exploring different functionality within vSphere.  You know, user type stuff.  At VFD5 Ravello went pretty deep on how their software functions within Google and AWS, so I thought I'd take a different approach this time around and try to dive a little deeper into how their technology functions… to the point that my brain started to hurt.

HVX – A hypervisor that runs hypervisors, designed to run on a hypervisor – huh?!?!

Ravello's magic sauce, HVX, is custom built from the ground up to be a high-performance hypervisor that runs applications (and other hypervisors) while itself running on a hypervisor (in the public cloud).  To say Ravello would know a thing or two about developing a hypervisor would be a major understatement – Ravello's co-founders, Benny Schnaider and Rami Tamir, were once the co-founders of another start-up called Qumranet.  You know, the same Qumranet that originally authored this little-known thing called the Kernel-based Virtual Machine, or better known as… yeah, KVM.  So needless to say, they have a little experience in the hypervisor world.

The first dream within a dream

As we know, Amazon's EC2 is essentially an instance of Xen, whereas Google's cloud utilizes KVM.  So when we publish our application inside of Ravello we essentially deploy an instance of HVX, installed within a VM that has been spun up on either Xen or KVM – once our HVX hypervisor has been instantiated on our cloud hypervisor, our images or VMs within Ravello are deployed on top of HVX.  So even without yet touching ESXi within Ravello we are two levels deep!  Now, in a native ESXi deployment we can take advantage of common virtualization extensions such as Intel VT and AMD SVM; in HVX, however, since we have already been abstracted away from the hardware by the cloud hypervisor, we don't have these.  Instead, HVX implements a technology called binary translation to translate any executable code from the guests that is deemed "unsafe", coupling this with something called direct execution, which allows any code that need not be translated to run directly on the CPU.  Honestly, if you want to dive deeper into binary translation and direct execution, Ravello has a great blog outlining it in a lot more detail than can fit into my maple-syrup-soiled, hockey-statistic-filled Canadian brain.  Aside from the performance features, HVX also presents emulated hardware up to the guests – the same hardware that we as VMware administrators are all used to – things like PVSCSI, VMXNET3, LSI, etc.  This is all available to our guests running on top of HVX, even to guests running on top of our ESXi guests on top of HVX – I know, right!


So, what actually happens when we click that 'publish' button from within the Ravello interface is somewhat unique – we know we need to install HVX into our cloud VM, but how many instances of HVX actually get deployed?  I'm not going to try to understand their algorithms around how they size their hypervisor, so I'm just going to say it depends on the resource allocation of the VMs within your application.  You could end up with a single VM running on one instance of HVX, or you could end up with 6 VMs running on 2 instances of HVX – however the deployment scenario plays out, you can be assured that only VMs belonging to that single application get deployed on the HVX instances – no VMs from other people's applications, not even any VMs from other applications that you may have.

That networking though!

Perhaps one of Ravello's major strong points is how it exposes a complete L2 network to the applications running on top of it!  By that I mean we have access to everything L2 provides – services such as VLANs, broadcasting, multicasting, etc. – within the overlay network Ravello implements.  As mentioned before, depending on the size of the application being deployed, we may or may not have multiple instances of HVX instantiated within the cloud provider.  If we are limited to a single HVX instance, then the networking is "simple" in the sense that it doesn't have to leave their hypervisor – all switching, routing, etc. can be performed within the one HVX instance.  However, when an application spans multiple HVX instances, some creative technology comes into play, as shown below.  Ravello has essentially built their own distributed virtual switching mechanism which tunnels the traffic between HVX instances or cloud VMs over UDP.


And storage…

The last challenge as it pertains to running Ravello applications inside the cloud comes in terms of storage performance.  Having HVX slotted in between the running applications and AWS allows Ravello to take advantage of the object storage capabilities of S3, yet still present the underlying storage to the VMs as a block device.  Essentially, when we import a VM into Ravello Systems, it's stored in its native format on top of HVX and appears as a block device, but under the covers the HVX file system is storing this information in object storage.  Aside from all this abstraction, HVX implements a copy-on-write file system, delaying the actual allocation of storage until it is absolutely needed – in the end we are left with the ability to take very fast snapshots of the images and applications we deploy, easily duplicating environments and allowing people like myself to "frequently mess things up" :)


The Ravello presentation at VFD5 was one of my favorites from a technology standpoint – they did a great job outlining just what it is they do, how they do it, and how they are choosing to deliver their solution.  There were some questions around performance that were met head-on with a whiteboard, and overall it was a great couple of hours.  Certainly check out some of the other great community posts below centered around Ravello to get some more nested goodness…

Ravello has a great product which honestly blows my mind when I try to wrap my head around it – we have our VMs, running on ESXi, running on HVX, running on Xen, running on some piece of physical hardware inside an Amazon data center – attaching to both Amazon EBS and S3 – and we are snapshotting these things, saving them as blueprints, and redeploying to Google's cloud, which completely flips the underlying storage and hypervisor!!  It's exporting VMs out of our current vSphere environments and deploying them into the public cloud, complete with all of their underlying networking – already set up for you!  Ravello has coined their nested virtualization capabilities 'Inception', and if you have ever seen the movie I'd say it certainly lives up to the name.  It has this magic about it – where you are in so deep yet still in control.  If you have a chance, check out their VFD5 videos and sign up for a free trial to check them out for yourself.

VMTurbo – allowing smart people to do smart things

Let's face it, our environments now are way more complex than they were 10 years ago!  Although some tasks and components may be easier to work with and not quite as specialized, we have a lot of them – and they all need to work together in perfect harmony.  The problem with this is that at times we get a couple of members of the choir who get a little out of key – CPU starts screaming, network gets chatty, and the next thing you know we have an environment that's spiraling out of control: CPU starts shoving network, network starts drowning out memory, and to be quite honest, pretty much everyone in the choir at this point sounds like $@#!.

Although this scenario may sound a little far-fetched or a wee bit out there – I mean, CPU can't sing, we all know that! – any way you put it, every choir needs a conductor: a leader, someone who oversees the complete environment, instructing certain members to gear down and others to ramp up.  Last month in Boston at VFD5, VMTurbo showed us just how they can wave the baton when it comes to bringing together the components of enterprise IT.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Having just seen VMTurbo at #VFD4 in Austin only 6 months prior, I was skeptical as to what they would have to talk about in Boston, thinking it was mainly going to be the same presentation – I was wrong!  They could have easily filled another 4 hours talking about the new features that they have embedded into the new release of their flagship product, Operations Manager 5.2.


Traditionally VMTurbo has gathered its market intelligence by polling and retrieving statistics and metrics from the hypervisor – while this is a good technique and is used by many monitoring solutions today, there are some applications which don't necessarily work well in this scenario.  Applications which look after their own resource usage – think SQL, Java heaps, etc. – may not properly reflect their true usage at the hypervisor layer.  For this reason VMTurbo has released an Application Control Module (ACM), which integrates completely into their supply/demand model of monitoring.  To help put it into perspective, let's have a look at SQL – ACM essentially brings in statistics around transactions, response time, database memory usage, etc. – all items which are not available within the hypervisor itself.


From here, VMTurbo users are able to define QoS policies, or SLAs, around their applications' performance – think "I need a response time of x milliseconds."  VMTurbo then looks holistically at your environment – it knows about the infrastructure underneath the app and what resources are available, and it now knows how that application is configured, its memory management, etc.  With all of this knowledge VMTurbo can then drive your environment and your application to a desired state – one where we know we are running efficiently while meeting the SLAs and QoS policies we have set up for the application!


Aside from applications, VMTurbo has been busy with a few other cool benefits as well!  With the adoption of public and hybrid cloud on the rise, they've seen a need to introduce a lot of enhancements in terms of networking – for example, knowing the physical location of applications is key to placing "chatty" applications close to each other in order to reduce latency, while still maintaining their "desired state" in terms of CPU, memory, and storage.  They do this by grouping chatty applications together in what they call a vPOD.  From there, OM uses NetFlow to discover your physical switching configuration and can work to ensure that vPODs are grouped together on the same top-of-rack switch, or in the same public cloud region, etc., moving the entire vPOD if one application requires more resources.


Just as VMTurbo has made steps to get more information out of the application stack, they are doing the same with storage!  By completely understanding the storage array underneath your infrastructure, OM is able to take action on storage issues around capacity and performance – think of things such as knowing whether to expand a current volume or deploy a new one!  OM understands almost everything there is to know about your infrastructure and applications, and can therefore make the best decision on how to meet the SLAs defined on those applications from a storage standpoint – one time it may make sense to simply grow a volume, while other times, due to other applications running on that same volume, it may be more efficient to create a new volume and migrate the application in question.

VMTurbo has certainly taken a unique approach to monitoring and resolving issues within your environment.  This whole economic market play – supply/demand applied to your infrastructure and applications – is different, but honestly makes sense when looking at resource utilization.  I like how Operations Manager has been built – the modular approach allows them to come out with new features such as the application and storage modules and simply plug them into the product, where they are inherited into the supply chain model and analytics can immediately be applied to them.  And as of now you can do it all from your own cloud on AWS!

If you want to watch the VMTurbo videos yourself you can do so here – or check out my complete VFD5 page here.  We have also had some other great community posts around what VMTurbo spoke about – be sure to check out each of them below, as each delegate seemed to write about a different part of the presentation…

Operations Manager can certainly do some amazing things, allowing you to automate actions such as moving an application to the cloud based on its supply/demand analytics – which at first sounds a bit scary – but hey, it wasn't that long ago that people were wary of enabling DRS, right?!?