Tag Archives: vSphere 5

VMware launches new delta version of VCP5-DCV exam

Today I received a message from VMware Education Services introducing a new way for current VCP holders to refresh or re-certify before their VCP expires.  As it stands, anyone holding a VCP certification earned prior to March 10, 2013 has only until March 10, 2015 to re-certify using one of the following methods.

  • Take the most current VCP exam in any of the available tracks (Datacenter Virtualization, Cloud, and Desktop – I'm not sure whether Network Virtualization qualifies for this or not).  No matter which track you hold your VCP in, it will be refreshed for another two years.
  • Take an advanced-level exam, meaning the VCAP-DCA or VCAP-DCD.  Not only will you advance to the next level, you will reset your VCP expiration as well.

Prior to today, those were your options.  Now, however, all you VCP holders have a third option, so long as you currently hold VCP5-DCV status.

What is a delta exam?

This is something new to VMware certifications.  Basically, this exam covers only the differences between vSphere 5.0/5.1 and vSphere 5.5.  Also, instead of the normal 135 questions, the delta exam has only 65.  The biggest difference is how the exam is delivered – you won't need to drive to a testing center for this one, as it is being offered online through Pearson VUE – and I'm assuming this will be delivered in a fashion similar to that of the VCA exams.  Another noticeable difference is price – this one comes in at $120 USD instead of the normal $220 USD.

Is it worth it?

This is something I can't answer for you – you will have to run through the scenarios in your head.  Currently I have an expiry date of January 2016 for my VCP5, and honestly I'd rather sit a new version of the VCAP than do the VCP again.  That said, can I expect a VCAP6-DCA to be available by January 2016?  I have no idea!  Do I want to risk losing my VCP because no new VCAP exam comes out, or possibly failing the VCAP when it does come out?  It's all a giant kerfuffle in my head right now!  One note: the email I received said the delta exam is only available to those who need to renew their VCP before March 10, 2015.  As noted above, mine was extended to January 2016 due to the completion of my VCAP in January of this year.  That said, I went through the process of being authorized for this delta exam and had no issues getting into the portion of the Pearson VUE site which allows me to schedule it.  So, try for yourself I guess!

Time’s a wastin!

Oh yah, better hurry and make your mind up.  This delta exam will only be available until November 30th, 2014!  So you have just under two months to figure out what you are going to do!  Honestly, this whole re-certification process just confuses me and puts me in a bad mood 🙂  Nonetheless, thought I'd share the news!  Oh, and I tried to use the VMUG Advantage VCP discount code – it didn't work!

ESXi 5.5, Dell VRTX and Storage – or the lack there of!

Over the past couple of months I've been working on some vCO workflows to set up and configure a Dell VRTX, as we are on the verge of deploying a handful of them.  Now, this isn't going to be a big post about what the VRTX can do, nor about how to use vCO to set it up – I'll save those for later – this is simply one small quirk that I've found when installing ESXi 5.5 onto the blades inside the VRTX.


Small is the new big.

I shouldn't have said small quirk – it's somewhat of a show stopper.  If you simply throw on the vanilla ESXi 5.5 image, or even the Dell-released image of ESXi 5.5, you will quickly notice that you have absolutely no storage available to you from the shared PERC controller that sits inside the VRTX.  Kind of hard to use the box with no storage 🙂

Before I go into my Dell rant: if you are just looking for the solution, scroll down to the "driver thang" section of this post.  For the rest of us…

Since writing this, Dell has released a supported version of ESXi 5.5 for the Dell VRTX blades.  Head over to Dell.com and punch in your service tag to get the image.  I've used and tested this and it works flawlessly 🙂  Thanks Dell!

Start-Rant -Type ‘Mini’

ESXi 5.5 is not certified on the Dell VRTX, so ultimately, you could say, it isn't supported – not by Dell and not by VMware.  What I don't understand is how Dell can release a "converged" solution, promote the crap out of it stating how great it is to run VMware on, and not support the latest release of ESXi!?!  I mean, this thing was released in the summer of 2014.  ESXi 5.5 was announced at VMworld in August 2013!  You would think that Dell would have the drive to hit the market with this thing supporting the latest and greatest software – but no.  Either way, I'm sure it will all be updated soon, and I'm sure they have their reasons – but in the meantime, here's how to get it going…


Ain’t nuttin but a driver thang.

The fact that you don't see storage isn't the result of any major issue or complex problem.  It's simply a driver.  The driver for the shared PERC that's included with the ESXi 5.5 image is just too new (?!?!).  The version you need – or at least the version I've found to work – is labelled megaraid_sas version 06.801.52.00.  What the difference is between these two versions I have no idea; I just know you need 06.801.52.00 to make it work.  You can grab that here.

Once you have the file you are just a VIB install away from VRTXing all night long.  Pick your poison when it comes to VIB installs: Update Manager, vMA, or esxcli – for the sake of not having to put too much effort into anything, I'll go over the esxcli way of installing the VIB.  First things first, upload the VIB to one of your datastores or an accessible area on the host.  From there, SSH in and install the VIB using the following command.

esxcli software vib install -d /tmp/megaraid/megaraid_sas-06.801.52.00-offline_bundle.zip

The only thing that stands in between you and your VRTX storage now is a reboot, so go ahead and do that.
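If you're paranoid like me and want to confirm the driver actually took before that reboot, a quick sanity check never hurts (the grep pattern below is just my own check, adjust as needed):

esxcli software vib list | grep -i megaraid

If the new 06.801.52.00 version shows up in the list, reboot away.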

There you have it – storage!

This is the only way I've found to make storage work with the VRTX and 5.5 – and hey, I could be crazy for doing all of this – so if you have any other suggestions, concerns, or comments I encourage them below, or send me a message on Twitter.  Like I said, I have a handful of these to configure so I'd rather not roll them out in some kind of crazy state 🙂

Friday Shorts – Updates, Veeam, Fevers, and Apple CIDR

Please get out of my Van Halen t-shirt before you jinx the band and they break up – Robbie (Adam Sandler) from The Wedding Singer

Horizon DaaS now available directly from VMware

The "Year of VDI" has yet to hit me and my day job.  There's always been lots of talk, but no walk – I have no excuse for this – it just seems like it always takes the back burner in my list of priorities.  VMware seems to continue to acquire companies, hire resources, and push their VDI initiatives, so I have to believe that they know something I don't (maybe I'll be less busy this year).  Anyways, a recent announcement about Horizon DaaS being available now through vCHS is the latest in VMware's VDI arsenal.  Brian Madden has an excellent article about it on his site.  I'm interested to see how well adopted the cloud desktop is, knowing that it hasn't made a huge impact inside of datacenters yet.  Only time will tell.

vSphere 5.5 Update 1 is here!

For all those who have implemented the "I don't upgrade until Update 1 is released" mindset, get your bits downloaded because it is here.  For the rest of you, you can now find some new functionality inside the flagship hypervisor – you know, that little thing called VSAN.  VSAN support, which was once only available to a small subset of 12,000 beta testers, is now baked into the Update 1 release of ESXi.  The other benefit of 5.5 U1 comes in the form of cloud management.  The vCHS vSphere Web Client plug-in is now available within the Inventories section of your web client – allowing you to view all of your dedicated and virtual private cloud instances from vCHS from the comfort of your web client.

Pour yourself some CIDR and read this.

Chris Wahl has a great post on Wahl Network where he compares simple networking concepts to his professional career development.  Judging by the successes Chris has had, I would recommend listening to what he has to say on how you can move further in your career!

I got a fever and the only prescription is more Visio Stencils

Veeam has a nice collection of free Visio stencils for both VMware and Hyper-V.  For those that fancy nice little diagrams and designs, you should probably go and add these to your arsenal.  Also, if you are running multiple hypervisors, this gives you a nice consistent look and feel across the diagrams that you are creating.  Thanks Veeam!

Around the world

Speaking of Veeam, they have a pretty cool contest about to start!  In celebration of their 100,000th customer – yes, that's six digits – Veeam is giving away a trip for two to ANYWHERE IN THE WORLD!  How do you enter?  Simply head over to the link above and guess where you think Veeam's 100,000th customer will be located.  And hey, in typical Veeam fashion they have a ton of cool prizes for the runners-up as well.  Google Glass, iPads, and Surface Pros aplenty.


8 weeks of #VCAP – CDP and LLDP

Well, 8 weeks of VCAP has dwindled down to a serious 8 days of VCAP – so for now, how about a little bit of random information from the Networking section of the blueprint.

First up, CDP and LLDP

These are relatively easy to configure; however, there are a few different modes they can run in, so I thought it would be best to write them down in hopes that maybe I'll remember them if any scenario requires me to configure them.

Basically, the functionality of the two protocols is identical – they both provide discovery information about the physical switch ports connected to a virtual switch.  CDP, however, works only with Cisco physical switches, whereas LLDP works with any switch that supports LLDP.  Another note: CDP can be enabled on both vSphere Standard Switches and vSphere Distributed Switches – LLDP, dvSwitch only!

So let's have a look at the dvSwitch config first.  Like I mentioned earlier, it's pretty simple.  From the Properties tab of a vSphere Distributed Switch select 'Advanced'.  From here it's as simple as setting the status to Enabled, the type to either CDP or LLDP, and the operation mode (explained below).

  • Listen – ESXi detects and displays information from the associated physical switch port, but information about the virtual switch is not made available to the physical switch.
  • Advertise – ESXi makes information about the virtual switch available to the physical switch, but doesn't detect any information about the physical switch port.
  • Both – Does both advertise and listen.


Now that we are enabled, we can view what information we receive inside the Networking section of a host's Configuration tab.  To do so, simply expand out your physical uplinks and click the information icon (shown below).


And that's all there is to that – with the distributed switch, anyways.  To get CDP working on a standard switch we are once again back in the command-line interface.  Probably good to brush up on these commands anyways, since they're also mentioned in the blueprint.  So, let's say we wanted to configure CDP on a vSphere Standard Switch called vSwitch0 with a value of both.  We could use the following command:

esxcli network vswitch standard set -v vSwitch0 -c both

And that's all there is to that – valid options for -c are both, listen, advertise, or down.  To view the information we can use the same process as above.
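To double-check that the setting stuck, you can also list the vSwitch config right from esxcli – the CDP Status field in the output should now read 'both':

esxcli network vswitch standard list -v vSwitch0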

8 weeks of #VCAP – Section 3 scenario – CPU Affinity!

Thanks once again to Tom Verhaeg for this great scenario.

The voice team has recently setup Cisco Unity. The VoIP administrator sends you an e-mail. To comply with Cisco best practices, the Cisco Unity VM needs to have CPU affinity set. You really don’t like this, but the VoIP administrator and your boss insist. Make it happen……..

Damn, this really isn't a fun thing to do.  CPU affinity restricts a VM to run only on the specific cores/processors that you specify.  There may be some requirements for this (such as the above), but overall you shouldn't do it.  It breaks the NUMA architecture and, more importantly, fully automated DRS!  To support it, the DRS automation level for the VM must be either manual or partially automated.

The process itself isn't that complicated.  Edit the settings of the VM and go to the Resources tab.  Under Advanced CPU, you'll find the option for CPU affinity.


If you do not see the Scheduling Affinity section on a host in a DRS cluster, you are running DRS in fully automated mode.  You can set DRS to manual for this VM by going to the cluster settings and, under DRS, selecting Virtual Machine Options.  Set the DRS mode for this VM to either disabled, manual, or partially automated.


8 weeks of #VCAP – More Networking Scenarios by Tom!

Another top notch scenario built by Tom Verhaeg! (blog/twitter)  Thanks Tom!

Your recent work on the new port group was top notch!  Now the network administrators have some new requirements.  You currently use one pNIC for the DvS.  A second pNIC has been connected to the network and you have been tasked with adding it to the DvS.  Also ensure that the DvS_StorageNetwork port group only uses the new pNIC and does VLAN tagging on VLAN ID 20.

Another networking objective.  Whoohoo!  Alright, let us first check out the current network adapters available on the host:


Alright, so vmnic2 is the one that we can add to the DvS_AMS01.  Go over to the networking view (Ctrl + Shift + N) and edit the settings of your DvS.  We first need to check whether the DvS allows for 2 uplinks, instead of just 1.


And check this out!  It's still set to 1.  This is a good one to remember for the exam: on the DvS object itself you configure the maximum number of physical adapters (also called uplink ports) per host.  So set that one to 2 and let's continue with adding vmnic2 to the DvS.

Since the host is already connected to the DvS, click the DvS and select Manage Hosts.  You will find your host, and you can add the second NIC.


You could also do this from the hosts and clusters view, do whatever works for you.

Now that we have added that pNIC to the DvS, we need to create the DvS_StorageNetwork port group.  Remember that we need to do VLAN tagging on VLAN ID 20 here.  Create the new port group now; its settings should look like this:


Now, for the last part: as ESXi does load balancing by default (based on originating virtual port ID), we will now have load balancing on the DvS_ProductionNetwork, which is great, but not what we need for the storage network.

Open up the settings of that port group and go to the Teaming and Failover section.


Both uplink ports are now under Active Uplinks. Let’s review real quick what the options are:

Active Uplinks – actively being used for traffic flow

Standby Uplinks – will only become active when a failure occurs on one of the active uplinks

Unused Uplinks – this adapter will never be used for this port group

We need to ensure that this port group never uses the original uplink, so move dvUplink1 over to the Unused Uplinks.  It should then look like this:


8 weeks of #VCAP – Network Scenario by @tomverhaeg

First off, I want to thank Tom Verhaeg (blog/twitter) for providing this scenario.  Tom got in contact with me and wanted to do what he could to help out with the 8 weeks of #VCAP series, as he is going through a similar process to mine, studying for the VCAP5-DCA.  So props to Tom for taking the time and initiative to give back.  Hopefully we see more from him in the coming weeks!  Even better for me, as I get to run through some scenarios that I didn't make up 🙂  Be sure to follow Tom on Twitter and check out his blog.  Thanks for the help Tom!!!

Your company leverages full Enterprise Plus licensing and has set up a Distributed vSwitch.  Recently, the number of ports needed on a particular port group exceeded the number configured.  You are tasked with creating a new port group, called DvS_ProductionNetwork, which only connects running VMs and also functions when vCenter is down.

Off we go again.  So, let's recall: there are 3 different options for port binding on a DvS.

Static binding – creates a port group with a manually set number of ports.  A port is assigned whenever a vNIC is added to a VM.  You can connect a vNIC with static binding only through vCenter.

Dynamic binding (deprecated in vSphere 5.0!) – a port is assigned to a vNIC when the VM is powered on and its vNIC is in a connected state.  You can connect with dynamic binding only through vCenter.

Ephemeral binding – a port is assigned to a vNIC when the VM is powered on and its vNIC is in a connected state.  This binding method allows the bypass of vCenter, letting you manage virtual machine networking when vCenter is down.

So, that's the one we need – ephemeral binding!  Luckily, it's quite simple to configure.  Hop over to the networking inventory (Ctrl + Shift + N) and create the new port group.  Give it a name and leave the number of ports at the default of 128.

Now edit the settings of this port group and select Ephemeral binding under the port binding dropdown.  Also note that the number of ports is now greyed out.



8 weeks of #VCAP – Storage Scenarios (Section 1 – Part 2)

Hopefully you all enjoyed the last scenario-based post, because you are about to get another one 🙂  It's kind of a different take on covering the remaining skills from the storage section, Section 1.  So, here we go!

Scenario 1

A coworker has come to you complaining that every time he performs storage-related functions from within the vSphere Client, VMware kicks off these long-running rescan operations.  He's downright sick of seeing them and wants them to stop, saying he will rescan when he feels the need to, rather than having vSphere decide when to do it.  Make it happen!

So, quite the guy, your coworker – thinking he's smarter than the inner workings of vSphere.  Luckily we have a way we can help him.  The functions we are going to perform are also part of the VCAP blueprint – coincidence?  Either way, the answer to our coworker's prayers is something called vCenter Server storage filters, and there are 4 of them, explained below…

RDM Filter (config.vpxd.filter.rdmFilter) – filters out LUNs that are already mapped as an RDM

VMFS Filter (config.vpxd.filter.vmfsFilter) – filters out LUNs that are already used as a VMFS datastore

Same Host and Transports Filter (config.vpxd.filter.sameHostAndTransportsFilter) – filters out LUNs that cannot be used as a VMFS datastore extent

Host Rescan Filter (config.vpxd.filter.hostRescanFilter) – Automatically rescans storage adapters after storage-related management functions are performed.

As you might have concluded, it's the Host Rescan Filter that we will need to configure.  Also, you may have concluded that these are advanced vCenter Server settings, judging by the config.vpxd prefixes.  What is certain is that all of these filters are enabled by default – so if we need to disable one, such as the Host Rescan Filter, we will need to set the corresponding key to false.  Another funny thing is that we won't see these keys set by default – basically, they are silently enabled.  Anyways, let's get on to solving our coworker's issue.

Head into the advanced settings of vCenter Server (Home → vCenter Server Settings → Advanced Options).  From here, disabling the Host Rescan Filter is as easy as adding the config.vpxd.filter.hostRescanFilter key with a value of false to the text boxes near the bottom of the screen and clicking 'Add' – see below.
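For reference, the exact key/value pair we are adding (the other three filters listed above follow the same pattern, should your coworker ever complain about those too) is:

config.vpxd.filter.hostRescanFilter = false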

And voila!  That coworker of yours should no longer have to put up with those pesky storage rescans after he's done performing his storage-related functions.

Scenario 2

You work for the mayor's office in the largest city in Canada.  The mayor himself has told you that he installed some SSD into a host last night and it is showing as mpx.vmhba1:C0:T0:L0 – but not being picked up as SSD!  You mention that you think those are simply SAS disks, but he insists they aren't (what is this guy on, crack? :)).  Either way, you are asked if there is anything you can do to somehow 'trick' vSphere into thinking that this is in fact an SSD.

OK, so this one isn't that bad really – a whole lot of words for one task.  Although most SSD devices are tagged as SSD by default, there are times when they aren't.  Obviously this device isn't an SSD, but the thing is we can tag it as SSD if we want to.  To start, we need to find the identifier of the device we wish to tag.  This time I'm going to run esxcfg-scsidevs to do so (with -c for a compact display).

esxcfg-scsidevs -c

From there I'll grab the identifier of the device I wish to tag, in my case mpx.vmhba1:C0:T0:L0 (crazy Rob Ford).  Now if I have a look at that device with the esxcli command, I can see that it is most certainly not SSD.

esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

So, our first step is to find out which SATP is claiming this device.  The following command will let us do just that:

esxcli storage nmp device list -d mpx.vmhba1:C0:T0:L0

Alright, so now that we know the SATP, we can go ahead and define a SATP rule that states this device is SSD:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T0:L0 -o enable_ssd

And from here we need to reclaim the device

esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0

And another look at the device listing should now show us that we are dealing with a device that is SSD.

esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

So there you go Mr. Ford – I mean, Mr. Mayor – it's now SSD!!!!
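And should the mayor ever come to his senses, the tag can be removed the same way it was added – delete the claim rule and reclaim the device (same identifier as above):

esxcli storage nmp satp rule remove -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T0:L0 -o enable_ssd

esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0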

And that's all for now 🙂

8 weeks of #VCAP – Random Storage Scenarios (Section 1 – Part 1)

So my 8 weeks of #VCAP is quickly turning into just under 4 weeks of #VCAP, so as I attempt to learn and practice everything on the blueprint you might find that I'm jumping around quite a bit.  Also, I thought I would try presenting myself with a scenario in this post.  Now, all of the prep for the scenario is done by myself, therefore it's a pretty simple thing for me to solve, but nonetheless it will help get me into the act of reading a scenario and performing the tasks in it.  So, this post will cover a bunch of random storage skills listed in Objective 1 of the blueprint – without further ado, the scenario…

Scenario 1

Let's say we've been tasked with the following.  We have an iSCSI datastore (iSCSI2) which utilizes iSCSI port binding to provide multiple paths to our array.  We want to change the PSP for iSCSI2 from MRU to Fixed, and set the preferred path to travel down C0:T1:L0 – only one problem, C0:T1:L0 doesn't seem to be available at the moment.  Fix the issues with C0:T1:L0, change the PSP on iSCSI2, and set the preferred path.

Alright, so to start this one off let's have a look at why we can't see that second path to our datastore.  If, browsing through the GUI, you aren't seeing the path at all, the first place I would look is claim rules (now how did I know that 🙂 ) – make sure that the path isn't masked away.  Remember the LUN Masking section.  So SSH into your host and run the following command.

esxcli storage core claimrule list


As you can see from my output, LUN masking is most certainly the cause of why we can't see the path.  Rule 5001 loads the MASK_PATH plugin on the exact path that is in question.  So, do you remember from the LUN Masking post how we get rid of it?  If not, we are going to go ahead and do it here again.

First step, we need to remove that rule.  That's done using the following command.

esxcli storage core claimrule remove -r 5001

Now that it's gone, we can load the current list into runtime with the following command.

esxcli storage core claimrule load

But we aren't done yet!  Instead of waiting for the next reclaim to happen or the next reboot, let's go ahead and unclaim that path from the MASK_PATH plugin.  Again, we use esxcli to do so:

esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 1 -L 0

And rescan the HBA in question – why not just do it via the command line since we are already there…

esxcfg-rescan vmhba33

And voila – flip back into the Manage Paths section of iSCSI2 and you should see that both paths are now available.  Now we can move on to the next task, which is switching the PSP on iSCSI2 from MRU to Fixed.  We will be doing this via the command line a bit later, but since we are only doing it on one LUN we can probably get away with simply changing it via the vSphere Client.  Honestly, it's all about selecting a dropdown at this point – see below.

I circled the 'Change' button on this screenshot because it's pretty easy to simply select from the dropdown and go and hit Close.  Nothing will happen until you actually press 'Change', so don't forget that.  Also remember, the PSP is set on a per-host basis.  So if you have more than one host, and the VCAP didn't specify to do it on only one host, you will have to go and duplicate everything you did on the other hosts.  Oh, and setting the preferred path is as easy as right-clicking the desired path and marking it as preferred.  And with that, this scenario is completed!
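For the record, if you'd rather stay in the shell for this one too, the same change can be sketched with esxcli – the device identifier and path name below are from my lab, so substitute your own:

esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -P VMW_PSP_FIXED

esxcli storage nmp psp fixed deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -p vmhba33:C0:T1:L0

The first command flips the device to the Fixed PSP; the second sets the preferred path for it.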

Scenario 2

The storage team thanks you very much for doing that, but requirements have changed and they now wish for all of the iSCSI datastores, both current and newly added, to utilize the Round Robin PSP.  How real life is that – people changing their minds 🙂

No problem, you might say!  We can simply change the PSP on each and every iSCSI datastore – not a big deal, there are only three of them.  Well, you could do this, but the scenario specifically mentions that we need to have the PSP set to Round Robin on all newly added iSCSI datastores as well, so there's a bit of command-line work we have to do.  And, since we used the vSphere Client to set the PSP in the last scenario, we'll do it via the command line in this one.

First up, let's switch over our existing iSCSI datastores (iSCSI1, iSCSI2, iSCSI3).  To do this we will need their identifiers, which we could get from the GUI; however, since we are doing the work inside the CLI, why not utilize it to do the mapping.  To have a look at identifiers and their corresponding datastore names we can run the following:

esxcfg-scsidevs -m

As you can see, there are three datastores we will be targeting here.  The identifier that we need is the first string field listed, beginning with t10 and ending with :1 (although we don't need the :1).  Once you have the string identifier of the device we want to alter, we can change its PSP with the following command.

esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -P VMW_PSP_RR

So, just do this three times, once for each datastore.  Now, to have any newly added datastores default to Round Robin, we need to first figure out what SATP the iSCSI datastores are utilizing, then associate the VMW_PSP_RR PSP with it.  We can use the following command to see which SATP is associated with our devices.

esxcli storage nmp device list

As you can see, our iSCSI datastores are being claimed by the VMW_SATP_DEFAULT_AA SATP.  So, our next step is to associate the VMW_PSP_RR PSP with this SATP – I know, crazy acronyms!  To do that we can use the following command.

esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR

This command will ensure that any newly added iSCSI datastores claimed by the default AA SATP will get the round robin PSP.

At this point we are done with this scenario, but while I was doing this I realized there might be a quicker way to change those PSPs on our existing LUNs.  If we associate our SATP with our PSP first, then we can simply utilize the following command on each of our datastores to force them back to their default PSP (which will be RR, since we just changed it).

esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -E

Of course we have to run this on each datastore as well – oh, and on every host 😉
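In fact, if typing the same command three times (per host!) sounds tedious, you could loop it – a rough sketch that assumes all of your iSCSI devices show up with t10.FreeBSD identifiers like mine do:

for dev in $(esxcfg-scsidevs -c | awk '/^t10.FreeBSD/ {print $1}'); do esxcli storage nmp device set -d "$dev" -E; done

This just grabs the first field (the device identifier) of every matching line from the compact listing and resets each device to the default PSP for its SATP.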

Scenario 3

Big Joe, your coworker, just finished reading a ton of vSphere-related material because his poor little SQL server on his iSCSI datastore just isn't cutting it in terms of performance.  He read some best practices which stated that the max IOPS for the Round Robin policy should be changed to 1.  He requested that you do so for his datastore (iSCSI1).  The storage team has given you the go-ahead, but said not to touch any of the other datastores or you're fired.

Nice, so there is really only one thing to do in this scenario – change the max IOPS setting for the iSCSI1 device.  So, first off, let's get our identifier for iSCSI1.

esxcfg-scsidevs -m

Once we have our identifier we can take a look at the Round Robin settings for that device with the following command.

esxcli storage nmp psp roundrobin deviceconfig get -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________

As we can see, the IOOperation Limit is 1000, meaning it will send 1000 I/Os down each path before switching to the next.  The storage team is pretty adamant we switch this to 1, so let's go ahead and do that with the following command.

esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________ -t iops -I 1

Basically, what we define with the above command is that we change that 1000 to 1, and specify that the type of switching we use is iops (-t).  This could also be set with -t bytes, entering the number of bytes to send before switching.
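For completeness, the bytes-based flavour of that same command would look something like this (the 8800-byte value is purely an illustration, not a recommendation):

esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________ -t bytes -B 8800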

So, that's basically it for this post!  Let me know if you like the scenario-based posts over me just rambling on about how to do a certain task!  I've still got lots more to cover, so I'd rather put it out there in a format that you all prefer!  Use the comments box below!  Good luck!

8 weeks of #VCAP – iSCSI Port Binding

My plan is to go over all the skills in Objective 1.3, but before we get into PSA commands and whatnot, let's first configure iSCSI port binding – this way we will have a datastore with multiple paths that we can fiddle around with 🙂

First off, iSCSI port binding basically takes two separate paths to an iSCSI target (the paths are defined by VMkernel ports) and binds them together.  So, we need two VMkernel ports.  They can be on the same switch or separate switches, but the key is that each can have only one network adapter assigned to it.  Meaning the vSwitch can contain multiple NICs, but you need to ensure that the config is overridden at the VMkernel port level to have only one NIC active.  Let's have a look at this.  Below you will see the current setup of my VMkernel ports (IPStore1 and IPStore2).


As you can see, my configuration here is actually wrong and needs to be adjusted – remember, one NIC per VMkernel port.  So, with a little click magic we can turn it into what you see below.


Basically, for IPStore1 I have overridden the default switch config on the VMkernel port, setting vmnic0 as active and vmnic1 as unused.  For IPStore2 we will do the same, except the opposite (hehe, nice, that makes no sense) – basically, override, but this time set vmnic1 as active and vmnic0 as unused.  This way we are left with two VMkernel ports, each utilizing a different NIC.

Now that we have the requirements set up and configured, we can go ahead and get started on binding the VMkernel ports together.  This is not a hard thing to do!  What we are going to want to do is right-click on our software iSCSI initiator and select 'Properties'.  From there we can browse to the 'Network Configuration' tab and simply click 'Add'.  We should now see something similar to below.


As you can see above, our VMkernel adapters are listed.  If they weren't, that would indicate that they are not compatible to be bound, meaning we haven't met the requirements outlined earlier.  By selecting IPStore1, then going back in and selecting IPStore2 (I know, you can't do both at the same time 🙂 ), clicking OK, and performing the recommended rescan, you will have completed the task.  We can now see below, inside the 'Manage Paths' section for a datastore that has been mounted via our iSCSI initiator, that we have some nifty multipath options.  First, we have an additional channel and path listed; as well, we are able to switch our PSP to things like Round Robin!
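As a side note, the same binding can also be done from the command line, which might save some clicking on the exam – a sketch assuming the software iSCSI adapter is vmhba33 and the two VMkernel ports are vmk1 and vmk2 (substitute your own names):

esxcli iscsi networkportal add -A vmhba33 -n vmk1

esxcli iscsi networkportal add -A vmhba33 -n vmk2

esxcli iscsi networkportal list -A vmhba33

That last command just lists the bound ports so you can confirm both made it.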


And kapow!  That's it!  We are done!  In the next post we will look at how to perform some PSP/PSA-related commands against this bad boy!

Holy crap the book is done – Troubleshooting vSphere Storage is available!

As some of you may know, for the past – what feels like years, but is probably closer to 6 months or so – I have been working on a book project revolving around troubleshooting storage in a vSphere environment.  At last I'm happy to say that the book is finally published and sitting on a variety of websites (Packt, Amazon) waiting to be purchased and consumed by you 🙂 !  The book, cleverly titled 'Troubleshooting vSphere Storage', is 150 pages of straight-to-the-point exercises that a vSphere admin can work through when dealing with storage visibility, contention, and capacity issues.


Early on, when I was pondering the idea of doing this, I had no idea about the amount of work and time commitment that writing a book would consume!  I most certainly have a newfound respect for the rock stars out there putting out 500-page books!  It really takes a major commitment from the authors, reviewers, and editors to get everything done!  Speaking of reviewers, my technical reviewers, Angelo Luciani ( blog / twitter ), Jason Langer ( blog / twitter ), and Eric Wright ( blog / twitter ), were key to me actually finishing this project.  Their feedback was awesome and without it, well, who knows what state the book would be in.  So a big thanks goes out to them for all their help!

Needless to say I'm pretty excited to have a published piece of work out there – and if it helps just one person, well, then I guess I've done what I set out to do 🙂