Monthly Archives: October 2011

Unable to delete an inactive datastore.

I ran into an issue where a datastore was present in the datastore inventory view but no longer in existence.  Deleting this datastore was not an option; after right-clicking it, the delete option was disabled.  First things to check… do you have a virtual machine (or machines) with a disk located on this datastore?  In my case I did…

So just delete or remove the disk from the VM, right?  Easy fix?  Well, not so much.  After trying to simply remove this disk from the VM in question I got an "Invalid Configuration for device 0" error.  Basically, it's trying to remove a disk residing on a datastore that no longer exists, so it has issues accessing the disk (since it's not real anymore).  The way I got around this was to take note of a couple of items: first, the datastore that contains the virtual machine configuration (vmx) file (found under the Options tab), and second, the SCSI address of the virtual disk that you are trying to remove.

Now, in order to remove this disk from the VM we will have to right-click the VM and remove it from inventory.  Once this is done you will need to SSH into the host, navigate to the virtual machine configuration file location we noted earlier, and edit the vmx file for that VM.  The commands look something like this (the datastore, directory, and file names are placeholders):
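cd /vmfs/volumes/DATASTORE_NAME/VM_DIRECTORY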

vi NAME_OF_VM.vmx

Inside this file you should be able to find the three lines which define the disk that we cannot remove.  They should be in a form similar to the example below (the disk file name is a placeholder), replacing scsi1:0 with the SCSI address noted previously.
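scsi1:0.present = "TRUE"
scsi1:0.fileName = "NAME_OF_DISK.vmdk"
scsi1:0.deviceType = "scsi-hardDisk"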

We need to comment out all of these lines by placing a '#' in front of them, then save the file.  When complete, the lines should look as follows…
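#scsi1:0.present = "TRUE"
#scsi1:0.fileName = "NAME_OF_DISK.vmdk"
#scsi1:0.deviceType = "scsi-hardDisk"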

Now, from within the VI client you can browse to the datastore containing the vmx file, right-click it, and add it to inventory.  Or run the following from the command line (just note: if you have vCenter running, use it.  You don't want mismatches between the host and vCenter; it just gets mucky).

vim-cmd solo/registervm /vmfs/volumes/datastore_name/VM_directory/VM_name.vmx

There, now we have removed the non-existent disk from the VM.  If the datastore you were trying to delete had no other VMs with non-existent disks pointing to it, and if it was really invalid, you will probably notice that it has now… disappeared!  That's all! 🙂

Stopping Veeam Backup Jobs with PowerShell… Remotely!

Why?  Why would you ever need to do this remotely?  And why with PowerShell?  Those are the questions that come to mind when I created this title, except it's a bit biased because I already know the answer as to why I needed to do this.  Basically, I ran into a situation where I could not RDP into my Veeam backup server, nor could I get into the console.  I did, however, need to power this server off, but didn't want to do so while it was still running replication jobs (didn't want to leave snapshots hanging around).  Thus, remote PowerShell was the only option I had in order to stop the replication jobs gracefully.

Hopefully you already have remote PowerShell set up before you get into this predicament, but if not, here's a good article on how to set it up; there is even some Group Policy material in there you can use to push remote PowerShell out to your servers.

Ok, so now that you have remote PowerShell set up, the process of stopping the replication or backup jobs is as follows.

1. Get into a remote session with your Veeam server.  (Depending on whether your Veeam server is a member of the domain, or whether you have trusted hosts set up, you may need to append -Credential Username to the end of this line.)

Enter-PSSession VEEAM_SERVER_NAME
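For example, with explicit credentials (the account name here is just a placeholder; you will be prompted for the password):

Enter-PSSession VEEAM_SERVER_NAME -Credential DOMAIN\Username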

2. Load the Veeam cmdlets by running their initialize script.

cd "C:\Program Files\Veeam\Backup and Replication"
.\Initialize-VeeamToolkit.ps1

3. Get a list of the running jobs.  (I tried get-vbrjob | where { $_.State -eq "Working" } but this didn't seem to want to return anything.)

get-vbrjob


4. Assign the job you want to stop to a variable.  Jobs that are currently running have a State value of Working.

$job = get-vbrjob | where { $_.Name -eq "Veeam Job Name" }

5. Stop the job

stop-vbrjob $job

Just a note: you could combine steps 3, 4, and 5 by piping them together as shown below, but I tend to like to do things linearly (it's just in my nature :))

Get-VBRJob | where { $_.Name -eq "Veeam Job Name" } | Stop-VBRJob


Update 10/26/2011

I got a tip from @vPowerCli of vpowercli.wordpress.com on how to get the jobs that are currently running.  This is a great blog focusing solely on Veeam and the Veeam cmdlets; I recommend checking it out.  Essentially you could run the following command:

Get-VBRJob | ?{$_.GetLastState() -eq "Working"} | Stop-VBRJob

This will loop through all the running jobs and stop each of them in turn.  For those who aren't 'linear' types. 🙂

When the job has been canceled you will be returned to your prompt.  Just repeat steps 4 and 5 for every job you want to cancel.  If you want to see a list of all of the Veeam Backup and Replication commands, just type Get-VBRCommands inside the Veeam PowerShell console.  I couldn't get this command to work remotely; you have to do it on the Veeam server by going to Tools -> PowerShell.

The Resource Pool Priority Pie Paradox – Part 2 – The Formula!

Part 2 – The 4:1 Formula

In Part 1 of this series I wrote about what exactly the Resource Pool Priority Pie Paradox is and how the share mechanisms can really give you some unexpected results.  As in Part 1, the fix is really dependent on your type of environment and how your VMs are configured.  One thing to remember is that VMs with 2 vCPUs will get twice the number of shares of VMs that have 1 vCPU.  VMs with reservations and limits can also affect how shares are applied.  I don't think that the following formula is perfect, and I'm sure it will always be a work in progress, but for a basic environment with no limits or reservations this is a good formula to use to divvy up shares between Production and Test resource pools at a 4:1 ratio (meaning VMs in the Production pool will get 4 times the number of shares of VMs in the Test pool).

I will go through this formula twice, once with numbers that I know work out to a whole number, and then with the numbers from one of my production clusters.  So, let's say we have a cluster with 1740 shares to divide up amongst our Production (High shares) and Test (Low shares) resource pools.  If we have a total of 8 VMs, 7 in Production and 1 in Test (assuming they all have 1 vCPU), our shares would be set as follows:

Production (80% of the 1740 = 1392) Divide that by the 7 VMs and you are left with ~199/VM

Test (20% of the 1740 = 348) Divide that by the 1 VM and you are left with 348/VM.

Yikes!  Hardly the outcome that we want.  So in my previous post I mentioned that setting shares to Custom and entering a value can help you get the results you want.  But how do we know what to enter for that value?  How can we determine what will give us the 4:1 ratio of Production to Test?  Essentially, this number would have to be tweaked as well in order to accommodate the addition of VMs to either of the resource pools.  Anyway, I have a formula that will do just that; essentially it goes like so…

a(4w – 3x) = nx
y = w – x

Where

a = # of VMs in Production Resource Pool

w = Total # of Shares for Production and Test Resource Pools

n = Total # of VMs in Production and Test Resource Pools

x = Custom Share value to set on the Production Resource Pool

y = Custom Share value to set on the Test Resource Pool

Neat, eh?  I can't take full credit; I basically had to lay out the scenario to some of the math geeks online in order to come up with the formula!

With that said, though, it works, so let's pump in our values from above…

a = 7, w = 1740, and n = 8 – Now we will use the first line to solve for x (custom shares for the Production pool).

a(4w – 3x) = nx
7(4(1740) – 3x) = 8x
7(6960 – 3x) = 8x
48720 – 21x = 8x
48720 = 29x
x = 48720/29
x = 1680

Now to get y

y = w – x
y = 1740 – 1680
y = 60

Now if we do the math, we will get the following

Production Pool (1680): Divide that by the 7 VMs and you get 240/VM

Test Pool (60): Divide that by the 1 VM and you get 60/VM

There you have it, Production getting 4 times more than Test.  For fun, let's do it one more time with some larger numbers.

Let's say we have 67 VMs total, 60 in the Production pool and 7 in the Test pool.  Our total CPU shares are 248000.  Again, let's assume we are dealing with 1 vCPU machines.  Using the same formula we would get the following:

a(4w – 3x) = nx
60(4(248000) – 3x) = 67x
60(992000 – 3x) = 67x
59520000 – 180x = 67x
59520000 = 247x
x = 59520000/247
x ≈ 240971
y = w – x
y = 248000 – 240971
y = 7029

So we are left with Production (240971) divided by the 60 VMs, roughly 4016/VM, and Test (7029) divided by the 7 VMs, roughly 1004/VM.

Once again, Production has 4 times the shares of Test.  Pretty cool, huh?  Now, I know this isn't very reflective of the real world (being all 1 vCPU machines), but to make it work with SMP VMs you would just need to use the total number of vCPUs in Production and the overall total number of vCPUs in place of the VM counts, and it should work.
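If you'd rather not push the algebra around by hand, here is a quick PowerShell sketch of the same formula (the function name and the rounding are my own, not part of any toolkit):

function Get-CustomShares {
    param(
        [int]$ProdVMs,      # a - number of VMs in the Production pool
        [int]$TotalShares,  # w - total shares across both pools
        [int]$TotalVMs      # n - total number of VMs in both pools
    )
    # Rearranging a(4w - 3x) = nx gives x = 4aw / (n + 3a)
    $x = [math]::Round((4 * $ProdVMs * $TotalShares) / ($TotalVMs + 3 * $ProdVMs))
    $y = $TotalShares - $x
    New-Object PSObject -Property @{ ProductionShares = $x; TestShares = $y }
}

# The first example from above: 7 Production VMs, 1740 total shares, 8 VMs total
Get-CustomShares -ProdVMs 7 -TotalShares 1740 -TotalVMs 8
# ProductionShares = 1680, TestShares = 60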

Part 2 – The 4:1 Formula

The Resource Pool Priority Pie Paradox Part 1 – Small Piece of Big Pie or Big Piece of Small Pie???

Part 1 – Small Piece of Big Pie or Big Piece of Small Pie
Part 2 – The 4:1 Formula

Post-VMworld season is upon us, and we all have our takeaways from the conference that we want to apply to our production environments at work.  A big one for me came from Performance Best Practices and Troubleshooting (VSP3866).  This session was jam-packed with best practices around monitoring, tuning, and troubleshooting VMs and hosts with CPU, memory, storage, or networking issues.

A lot of information was covered in a short time, and I jotted down many different scenarios and fixes that I wanted to apply to my own production cluster, but the biggest one that stuck out for me was something called The Resource Pool Priority-Pie Paradox.  Now, this is nothing new; it's been around for quite some time.  Craig Risinger has a great guest post on Duncan Epping's Yellow Bricks blog here dating back to February of 2010.  The main point of the article is that having many VMs in a production pool with high shares, and few VMs in a test pool with low shares, can in some scenarios end up with your production VMs receiving less CPU and memory than your test VMs.

Although there have been many other blog posts about this subject, it was something I had never noticed or even thought of.  The main reason it has never affected our environment is that resource pool shares only kick in when contention occurs, and since we have the physical resources to support all of our VMs, we have never seen the share mechanisms come into play.  However, if contention ever does occur, this would become a major issue.  It's best to read Duncan's post for a more in-depth explanation, but for my own learning I decided to recreate this with a simple lab example.

I have created a cluster containing two resource pools (Production and Test).  Production has its shares set to High, whereas Test has its shares set to Low.  I've used six small VMs (1 vCPU, 256 MB RAM) for this example, laid out in a 5 to 1 ratio of Production to Test.  So, if the share mechanisms were to kick in, the Production resource pool would receive 80% of the resources to split amongst its 5 VMs (16%/VM) and the Test pool would receive 20% of the resources to split amongst only 1 VM (20%/VM).  Looking at the 'Worst Case Scenario' column in the screenshots below, you can see that it's actually much better to be offered the big piece of the small pie…


So, what is the answer?  I think I will take the easy way out and say it depends.  It depends on the amount of resources in your environment, it depends on the VMs that reside in your resource pools, and it depends on the limits, reservations, and shares set up on your resource pools.  In this situation, simply setting the shares to Custom, with Production at 9500 and Test at 500, results in the following.

As you can see, the Production VMs increased to 636/VM and the Test VM decreased to 169.  You can set the custom shares to whatever you need in order to get your desired 'Worst Case Scenario'.  In addition, you can also add some reservations and limits; the main point is that you need to do the math for your environment.  Remember, 2 vCPU VMs will get twice the shares of a 1 vCPU VM, which in turn will sway the numbers even more.  So, right-size your VMs, keep an eye on your 'Worst Case Scenario', and if all else fails, hook up with @DuncanYB or @FrankDenneman on Twitter.

Part 1 – Small Piece of Big Pie or Big Piece of Small Pie
Part 2 – The 4:1 Formula

vMotion with your finger!!! – New vSphere Client for iPad

As if the magic of vMotion wasn't good enough, you can now do it on your iPad.  That's right, you now have the ability to take a running production virtual machine and move it from one piece of hardware to another using nothing but your finger!  And to top it off, it has sound effects!  (Why do I feel like there was an overabundance of plane noises coming from the sessions at VMworld Europe?)

Along with vMotion, the new iPad vSphere client added support for both older (3.5) and newer (5.0) versions of vSphere, plus some other bug fixes and features.  So, if you haven't upgraded, you might as well; it's available in the App Store now.  Keep in mind, you will have to upgrade your vCMA to 1.2 as well, which is available on VMware's Flings page here.  It's a pretty quick and painless task.

On a serious note, it is nice to see more administrative tasks coming to clients built for tablet use.  The need for lugging laptops around is getting smaller and smaller with every release!


Use PowerCLI to shut down VMs and hosts when running on battery.

UPDATE – I've had the unfortunate chance of testing this script a few times and have found that it's not as efficient as it could be.  If I were you I would check out my updated post Practise makes perfect! More PowerCLI APC Powerchute Network Shutdown Goodness (Now with Power On!) as it completes much faster and has a function to turn the VMs back on once power is restored.  Thanks!

A few months ago, in order to prepare for the release of ESXi and vSphere 5, I went through the motions of migrating all of our ESX hosts to ESXi. I thought I had all my ducks in a row concerning all of the third-party solutions that connect to our environment (Veeam, VCops, etc.), but one that completely slipped my mind was our APC and vGhetto shutdown scripts running inside our vMA. Since our then-current solution relied on the ability to SSH into the hosts, and ESXi by default doesn't have an SSH server running, the scripts proved to be somewhat, how shall I say this… useless (and yes, I found out the hard way!!). When looking for alternatives I first went to APC and was delighted to see that they finally had a version of their PowerChute Network Shutdown available for the vMA appliance. What I found out, though, was that it basically peeled through the list of servers connected to the vMA and shut them down accordingly, which in my case is not the answer. I have multiple ESX(i) instances connected to the vMA which reside in many different locations, so if the power failed at our main datacentre, our vMA would receive the shutdown command and initiate a shutdown sequence on ESX hosts in an offsite location. So, back to the drawing board, as that was definitely not the answer.

I liked the way William Lam's vGhetto scripts originally worked, in that you could specify which hosts and even which vMA to shut down, so I decided to recreate, or at least get close to, that same functionality in PowerShell. Below is what I came up with. The script is very much in its infancy, so if you can see some spots for improvement please let me know! Also, I don't claim to be a PowerShell expert in any sense, so definitely don't take this directly into production; test it in a lab first, or even comment out the shutdown lines and have a look at the output. I placed the script on a Windows box outside of our virtual infrastructure, so essentially the physical machine receives the shutdown command, but before shutting itself down it runs the PowerShell script to gracefully shut down the virtual infrastructure (VMs and hosts) first.

Anyways, here it is, and if you have any suggestions, comments, or questions don't hesitate to ask. We can muddle through this together 🙂

#################################################################################
# Shutdown VMs and Hosts
#
# This script will loop through a list of ESXi hosts and initiate shutdown
# commands to the VMs residing on them. If VMware Tools is installed, the
# script will attempt to do a graceful shutdown. If VMware Tools is not
# installed a hard power off will be issued. One note: if you have any VMs
# that you would like to remain on until the end of the process, be sure to
# put their names in the $vmstoleave variable and also be sure they reside on
# the last host listed in the $listofhosts variable.
#
# i.e. If I wanted to have VM1 and VM2 stay on till the end I would have to be
# sure that they reside on esxi-03 and my variables would be set up as follows
#
# $vmstoleave = "VM1", "VM2"
# $listofhosts = "esxi-01", "esxi-02", "esxi-03"
#
# Created By: Mike Preston, 2011
# Updates: MWP - March 2012 - now looks to see if any VMs are still on after
#                doing the initial shutdown, then runs a hard stop, and enters
#                maintenance mode before shutting down.
#################################################################################
# list of hosts to process
$listofhosts = "esxi-01", "esxi-02", "esxi-03"
# list of VMs to 'go down with the ship' - VMs must reside on last host in above list.
$vmstoleave = "VM1", "VM2"
# loop through each host
Foreach ($esxhost in $listofhosts)
{
    $currentesxhost = Get-VMHost $esxhost
    Write-Host "Processing $currentesxhost"
    # loop through each powered-on VM on the host
    Foreach ($VM in ($currentesxhost | Get-VM | where { $_.PowerState -eq "PoweredOn" }))
    {
        Write-Host "===================================================================="
        Write-Host "Processing $vm"
        # if this is a VM that is supposed to go down with the ship
        if ($vmstoleave -contains $vm)
        {
            Write-Host "I am $vm - I will go down with the ship"
        }
        else
        {
            Write-Host "Checking VMware Tools...."
            $vminfo = Get-View -Id $vm.ID
            # If VMware Tools is NOT installed (ToolsVersion of 0), hard power off
            if ($vminfo.config.Tools.ToolsVersion -eq 0)
            {
                Write-Host "$vm doesn't have VMware Tools installed, hard power this one"
                # Hard power off
                Stop-VM $vm -Confirm:$false
            }
            else
            {
                Write-Host "I will attempt to shutdown $vm"
                # Power off gracefully via VMware Tools
                $vmshutdown = $vm | Shutdown-VMGuest -Confirm:$false
            }
        }
        Write-Host "===================================================================="
    }
    Write-Host "Initiating host shutdown in 40 seconds"
    Sleep 40
    # look for any VMs still powered on and hard stop them
    Foreach ($VM in ($currentesxhost | Get-VM | where { $_.PowerState -eq "PoweredOn" }))
    {
        Write-Host "===================================================================="
        Write-Host "Processing $vm"
        Stop-VM $vm -Confirm:$false
        Write-Host "===================================================================="
    }
    # Shut down the host
    Sleep 20
    Set-VMHost -VMHost $currentesxhost -State Maintenance
    Sleep 15
    $currentesxhost | Foreach { Get-View $_.ID } | Foreach { $_.ShutdownHost_Task($TRUE) }
}
Write-Host "Shutdown Complete"
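One assumption worth calling out: the script expects PowerCLI to be loaded and already connected to vCenter (or to the hosts directly) before it runs, so on the Windows box that fires it off you would want something along these lines first (server name and credentials are placeholders):

Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server vcenter01 -User administrator -Password 'password'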


VMware View on the $99 HP TouchPad

Last month, being the quintessential geek that I am, I spent the greater part of one of my weekends installing the Ubuntu chroot on the HP TouchPad and blogging about it.  Since then I got to thinking: why couldn't I just grab the VMware View Open Client, install it into Ubuntu, and essentially run a View desktop on my TouchPad?  Which brings us to this post.  Although it's not running directly on webOS, it's still kind of cool and I thought it deserved a few words… but since a picture is worth a thousand words, I'll show you the proof first.

There you have it, a fully functional Windows 7 desktop from VMware View running in the Ubuntu chroot on the HP TouchPad.  Now, before getting too excited, please remember that obviously none of this is supported by VMware or HP.  I did it just because I wanted to see if I could.  Mind you, I couldn't get the VMware View Open Client to compile in the Ubuntu chroot; it kept complaining about the ARM processor and not having 'Thumb' support.  So, if anyone knows how to recompile the Open Client on an ARM processor, please leave a comment below and let me know.  The packages that I installed for the recompile attempt were libgtk2.0-0 libgtk2.0-dev libglib2.0-0 libglib2.0-dev libxml2 libxml2-dev libcurl4-openssl-dev intltool libboost-dev boost libboost-all-dev build-essential.  I used the Open Client found here and followed the instructions to recompile here.

But like I said, I had no luck with the recompile, so I had to cheat somewhat in getting the client up and running.  If you haven't installed the Ubuntu chroot, do so by following whatever set of instructions you like; mine are here.  Basically, I downloaded the HP ThinPro add-on package, extracted the View client Debian package from it, copied it to the TouchPad, and simply ran the following command.

dpkg -i hptc-view-client_4.6.0-1_armel.deb

I can't remember if there were any dependencies that I needed to get; you may have to remove your current rdesktop install first.  A message will be displayed if you need to get and/or remove anything; if so, just use the sudo apt-get install/remove commands (see below) to sort out whatever you need.  Anyways, after installing, simply drop to a terminal from within Ubuntu, run vmware-view, and your client should fire right up.
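For example, if the installer complains about the existing rdesktop package, pulling it out first would look like this (just an illustration of the cleanup step):

sudo apt-get remove rdesktop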

To tell you the truth, the performance is really not that bad.  I thought it might suffer somehow running in a chrooted Ubuntu alongside webOS, but it was actually pretty snappy.

Again, this is definitely not supported, and to tell you the truth, it will drain your TouchPad battery very, very fast, so it probably isn't very practical either.  But just like the Ubuntu install, it was fun and I thought I would share…

Cannot Power On virtual machine – Invalid Signature in change tracking file

So, I've run into this issue a few times now, and since a Google search of the error message returns basically nothing worthwhile (except for a VMware KB article with basic CBT info), I thought I would write up the solution that I used in order to get this VM back up and running. Let me say, though, before I get going: if you have run into the same or a similar situation, there is no guarantee that this method will work. It worked great in my environment, but there are many different issues that could cause this error, and this is only the fix for the one VM in my environment. Also, be sure to open up a support ticket with VMware support, as the last thing you want to do is cause further corruption to an already corrupted production virtual machine.

So, first off, the error message: "Invalid Signature in Change Tracking File". Basically, it seemed that on a snapshot creation or consolidation (I'm not sure which), one of my VMs would completely power off, and on attempts to power it back on I received this error. Immediately I responded by attempting to disable CBT, both on the VM and by commenting out the CBT lines in the vmx; however, this did not resolve my situation. The error still occurred and the VM would not power on.
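For reference, the CBT-related vmx entries look something like the following; the disk address here is just an example:

ctkEnabled = "FALSE"
scsi0:0.ctkEnabled = "FALSE"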

So essentially this points to some sort of corruption or issue within the change tracking file itself. The change tracking file (VM-NAME-ctk.vmdk) did exist in the VM's directory, so I moved on to the virtual disk descriptor file (VM-NAME.vmdk). If you edit this file you can see a line that defines the change tracking file, as shown below. This is the line that needs to be removed or commented out in order to continue. I personally like to make a backup of any file before I edit it, and would recommend doing that.
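In my case the line looked like this (the VM name is a placeholder):

changeTrackPath="VMNAME-ctk.vmdk"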

Comment that line out by simply placing a # sign in front of it, then hit ESC and type :wq to save and quit. Now we need to clone the disk (or disks, if you have multiple) to another folder. This way we can play with the new disks and not affect the old (and, if you are like me, untrusted) disks that were generating the error. The command to clone the disk from the ESXi console is…

vmkfstools -i /path/to/vm/VMNAME.vmdk /path/to/newvm/VMNAME.vmdk

Depending on the size and number of your virtual disks this can take a while.

Now it is time to remove the old vmdks from our VM and attach our newly created cloned disks. This is done through the vSphere client and is pretty straightforward, but I've included some screenshots as a reminder.

First, right-click the VM -> Edit Settings; we want to remove the original hard drives from the virtual machine. Select the hard disk that you wish to remove and click the 'Remove' button in the top left. Be sure to choose 'Remove from Virtual Machine' on the next screen. You might want to leave the original disks there rather than choosing 'Delete from disk', especially if VMware support is involved.



Once you have removed all of your disks, click OK to apply the changes. Once the task has gone through, right-click the VM -> Edit Settings again. Now we need to add our newly created, cloned vmdks back to this VM. Click 'Add' and select Hard Disk as the device type. Be sure you add your disks back to the VM in the same order that they were in when you removed them.




When you reach the 'Select a disk' section, make sure you select 'Use an existing virtual disk' rather than creating a new one. Browse to the location of your cloned disks and select them; when done, click 'OK' to apply everything. You should now be able to power on your VM (at least I was). Once you do so, a new change tracking file will be created. You probably should also Storage vMotion this VM somewhere else, just to clean up the folder names. Hope this helps someone out, but again, be sure to contact VMware support if you have it!

Installing VMware Tools on Debian 6 (squeeze)

I was in the process of spinning up a few Debian 6 squeeze servers today when I ran into a little bit of trouble installing VMware Tools inside the guest.  These were bare-bones installs, containing basically just the system utilities from the tasksel menu (no desktop GUI environment).  I proceeded to install VMware Tools the way I normally do, and have for previous versions of Debian (Right-Click VM -> Guest -> Install/Update VMware Tools), when I ran into the following error during the install:

'The path "/usr/bin/gcc" is not valid path to the gcc binary.'

It was at this point I went and pulled down the build-essential package, thinking maybe gcc wasn't installed, and knowing that I would need 'make' as well.

After an apt-get install build-essential I received the same error when trying to re-install.  The path /usr/bin/gcc was certainly valid, and it most certainly pointed to the gcc binary.  What I found out is that when you do a base install of Debian 6 Squeeze (through a net-install image) you do not receive the header files for the kernel you are running.  These files are needed for VMware Tools to compile and configure its modules for the kernel version you are running.  So, an easy fix…

apt-get install linux-headers-$(uname -r)

After running this command and then re-installing VMware Tools, you should see the following during the installation process:

"Detected GCC binary at "/usr/bin/gcc-4.3".

Just accept the default here to not change the path, complete the rest of the VMware Tools install, and Bob's your uncle.

HA Error: Unable to contact a primary HA agent.

Ok, here's a quick one: an issue and resolution (sort of) that I ran into today.  Whether I actually fixed the error, and whether it will come back, only time will tell.

Basically, I was running some updates on a few hosts today, taking them in and out of maintenance mode when the following error occurred.

"Cannot complete the configuration of the HA agent on the host.  Unable to contact a primary HA agent."  

Sounds kind of scary at the start.  I had no clue what was causing this and was a little worried that, through some sort of magic, a host might somehow be declared isolated and start restarting VMs.  I tried to disable and re-enable HA; however, the process seemed to take a long time, always getting stuck at 72%.  I restarted the vCenter Server service in order to cancel the tasks, then tried re-enabling HA again… same slowness, same 72% hang-up.  After a few attempts I finally decided to wait and see if it was really hung.  After a certain amount of time (not sure how long, but it was quite a while) the HA configuration failed and moved on to the next host… which in turn took the same amount of time and failed.  It wasn't until I allowed this process to continue and fail on every host that things started to work as expected.  After it had timed out on its own, it was just a matter of disabling HA, letting that process finish, and then re-enabling HA and letting that process finish; with that, all of my HA errors had cleared.  I'm peachy now!  Moral of the story: don't mess with the HA tasks.  Even if they seem to be taking forever, just let them be and time out by themselves 🙂  Also, note to self – buy Duncan and Frank's book – I think I need it!!

Toronto VMUG – Sept 27 Recap

Although I've had plans to go to the last 3 or 4 Toronto VMUG events, something has always come up which has put the 2-hour drive on the back burner.  The only one I had made it out to was the regional VMUG, which I thought was more like a mini-VMworld in a sense.  I did, however, make it out to the latest half-day event at the Metro Toronto Convention Centre on September 27th, and I'm glad I did.  The half-day events seem to be more topic-oriented, with all vendors and speakers focusing on one functionality or product of the VMware Cloud Infrastructure Suite.  The smaller crowds give you more of a chance to talk and network with your peers in a more intimate setting.  With that said, here's what went down.

The day started off with Angelo Luciani (the VMUG leader), who gave a short presentation covering the day's agenda, a brief overview of upcoming meetings, the VMUG Advantage program, and the ways to stay connected between VMUG meetings.  Those included Twitter (follow @torontovmug), the Toronto VMUG LinkedIn Group, and the new blog (www.tovmug.com).

Mike from VMware was next up on the list and went through a lot of the announcements that were made at VMworld, even down to interesting factoids from Paul Maritz's keynote regarding the ratio of VMs to babies and vMotions to aircraft.  Mike also took us through the releases of ESX/ESXi from 1.0 all the way up to 5.0 and explained most of the new features and performance gains in 5.0.  In addition, he provided a brief explanation of all the products that make up the new VMware Cloud Infrastructure Suite.

(VMware's Presentation)

Next up was one of the sponsors, Xsigo.  I can't remember the name of the gentleman who was presenting, but it was a very good presentation nonetheless.  Xsigo talked about how they can streamline I/O management with their product called I/O Director.  Basically, by virtualizing the network and storage I/O with their appliances you can, in essence, provide up to 80GB worth of bandwidth to EACH of your servers.  Will you ever need that?  Who knows; I know I don't… right now.  But hey, we've all seen the old computer advertisements stating you would never need more than 640K of RAM, so anything is possible.  Xsigo looks like they have an interesting product line, but I'm not sure it is something you would just add to an existing datacentre.  I would definitely consider it when building a new datacentre, but with the investment in the HBAs, Brocades, and network switches on the back of our blade chassis now, it's a little too much to just rip out and throw away.

Next was the guest speaker, Scott Lowe.  This was the highlight of the day for me and, really, the driving factor behind why I made the trip.  Scott is an active blogger who is constantly contributing content and dedicating his time to the community; I would definitely recommend checking out his blog over at blog.scottlowe.org and following him on Twitter.  Scott made the long haul from Denver to Toronto to speak about stretched clusters in his Elastic vSphere presentation.  He covered all of the different types of stretched clusters you can build, along with the different design considerations and 'gotchas' that you need to keep in mind.  Scott took us from start to finish through all the components of a stretched cluster that need to be addressed, including storage, HA, DRS, Storage DRS, networking, and operations.  He then gave his personal speculation on what he hopes will be included in future releases to help customers more easily deploy these types of clusters and avoid the common problems that arise from them.  Personally, I've never had the need nor the dollars to build a stretched cluster, but the overall concept of 'Disaster Avoidance' is appealing to me.  The ability to command your entire datacenter workload to vMotion over to another datacenter without disruption to service is very cool and somewhat futuristic.  Scott was a vibrant speaker and had the audience engaged in ways that the other speakers did not.  Not once did he mention any specific vendors or products; in my opinion his passion for the technology gleamed throughout the entire presentation, and it felt as if it was just another customer delivering it.  Well done.

F5 took the stage next to speak about their BIG-IP architecture and how they can improve performance for VMs sitting behind their appliances, as well as conserve bandwidth to and from the clients accessing them.  They spoke about their plugins and integration with vCenter and how they can automate provisioning VMs on demand to respond to changes or increases in traffic volume.  They went through their model of a hybrid cloud and how their products fit into that design, as well as some of the SSO and WAN acceleration features they provide.  They also touched a bit on how they can help accelerate View connection starts and restarts.  I didn't find this presentation as helpful as the rest of them; however, I'm not much of a networking/bandwidth/traffic guy.  I'm sure the networking gurus in attendance were plenty interested, but it was almost foreign to me.

Cisco closed the day talking about their UCS platform and vSphere 5.  They explained very briefly how they became one of the top 3 blade computing companies within a very short time in the market.  What caught my attention most from their presentation was how, between Cisco UCS Service Profiles, VMware Auto Deploy, and vCenter Host Profiles, you can essentially spin up hosts with little to no manual interaction aside from pushing a power button, and in a very short period of time as well; perfect for the dynamic workloads a cloud environment requires.  They also spoke briefly about VXLAN and the advantages we will see once it is certified, as well as their solution for migrating workloads between datacenters using the Nexus 1000V.  It was yet another great discussion, which spawned many questions around the networking end of things and how to get your network team involved with your virtualization initiatives.

In short, it was a great day jam-packed with great content (and a few great giveaways).  I want to thank Scott Lowe for making the trek up to the great white north and dedicating his time and knowledge to this event.  For those who are not able to attend a VMworld or a regional VMUG, it really gives them the opportunity to meet, see, and learn from the 'virtualization rock stars' and bloggers.  I also want to thank Angelo (and all the other VMUG leaders) for dedicating their time and effort to organizing these events.  They are invaluable to me as a customer and an end user, and I know organizing sponsors, booking event locations, arranging guest speakers, and putting together agendas can be a challenging task; Angelo does a great job at it.  And for those who weren't there (or those who were and need a reminder), be sure to stay connected with your local Toronto VMUG by following @torontovmug on Twitter, joining the Toronto VMUG LinkedIn Group, and subscribing to the Toronto VMUG blog.  As always, you can get more information about your local VMUG at www.myvmug.org or by following @myvmug on Twitter.