So my 8 weeks of #VCAP prep is quickly turning into just under 4 weeks of #VCAP, so as I attempt to learn and practice everything on the blueprint you might find that I'm jumping around quite a bit. Also, I thought I would try presenting myself with a scenario in this post. Now, all of the prep for the scenario was done by myself, so it's a pretty simple thing for me to solve, but nonetheless it will help get me into the habit of reading a scenario and performing the tasks in it. So, this post will cover a bunch of random storage skills listed in Objective 1 of the blueprint – without further ado, the scenario…
Let's say we've been tasked with the following. We have an iSCSI datastore (iSCSI2) which utilizes iSCSI port binding to provide multiple paths to our array. We want to change the default PSP for iSCSI2 from MRU to Fixed, and set the preferred path to travel down C0:T1:L0 – only one problem: C0:T1:L0 doesn't seem to be available at the moment. Fix the issues with C0:T1:L0, change the PSP on iSCSI2, and set the preferred path.
Alright, so to start this one off let's first have a look at why we can't see that second path to our datastore. If you aren't seeing the path at all when browsing through the GUI, the first place I would look is the claim rules (now how did I know that 🙂 ) to make sure that the path isn't masked away – remember the LUN Masking section. So SSH into your host and run the following command.
esxcli storage core claimrule list
As you can see from my output, LUN masking is most certainly the reason we can't see the path. Rule 5001 loads the MASK_PATH plugin on the exact path in question. So, do you remember from the LUN Masking post how we get rid of it? If not, we'll go ahead and do it here again.
First step, we need to remove that rule. That's done using the following command.
esxcli storage core claimrule remove -r 5001
Now that it's gone we can load the current list into runtime with the following command.
esxcli storage core claimrule load
But we aren't done yet! Instead of waiting for the next reclaim to happen or the next reboot, let's go ahead and unclaim that path from the MASK_PATH plugin. Again, we use esxcli to do so
esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 1 -L 0
And rescan the HBA in question – why not do it via the command line since we are already there…
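For reference, a single-adapter rescan from the CLI looks something like this (using vmhba33 to match the adapter from the unclaim command above – substitute your own adapter name):

esxcli storage core adapter rescan -A vmhba33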
And voilà – flip back into the Manage Paths section of iSCSI2 and you should see that both paths are now available. Now we can move on to the next task: switching the PSP on iSCSI2 from MRU to Fixed. We will be doing this via the command line a bit later, but since we are only changing it on one LUN here (and you probably already popped into the GUI to check your path status), we can get away with simply doing it via the vSphere Client. Honestly, it's all about just selecting a dropdown at this point – see below.
I circled the 'Change' button on this screenshot because it's pretty easy to simply select from the dropdown and go and hit Close. Nothing will happen until you actually press 'Change', so don't forget that. Also, remember, the PSP is set on a per-host basis. So if you have more than one host and the VCAP didn't specify to do it on only one host, you will have to go and duplicate everything you did on the other host(s). Oh, and setting the preferred path is as easy as right-clicking the desired path and marking it as preferred. And with that, this scenario is complete!
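If you'd rather stay in the CLI for this step too, the same two changes can be sketched there as well – the device identifier below is a placeholder (grab the real one for iSCSI2 from your host), and the path name matches the one from this scenario:

esxcli storage nmp device set -d <device-identifier> -P VMW_PSP_FIXED
esxcli storage nmp psp fixed deviceconfig set -d <device-identifier> -p vmhba33:C0:T1:L0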
The storage team thanks you very much for doing that but requirements have changed and they now wish for all of the iSCSI datastores, both current and any newly added datastores, to utilize the Round Robin PSP. How real life is that, people changing their mind 🙂
No problem you might say! We can simply change the PSP on each and every iSCSI datastore – not a big deal, there's only three of them. Well, you could do this, but the question specifically mentions that we need to have the PSP set to Round Robin on all newly added iSCSI datastores as well, so there's a bit of command line work we have to do. And, since we used the vSphere Client to set the PSP in the last scenario, we'll do it via command line in this one.
First up, let's switch over our existing iSCSI datastores (iSCSI1, iSCSI2, iSCSI3). To do this we will need their identifiers, which we could get from the GUI; however, since we are doing the work inside the CLI, why not utilize it to do the mappings. To have a look at the identifiers and their corresponding datastore names we can run the following.
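A command that maps datastore (volume) names to their underlying device identifiers – most likely the one intended here – is:

esxcli storage vmfs extent list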
As you can see, there are three datastores we will be targeting here. The identifier we need is the first string field listed, beginning with t10 and ending with :1 (although we don't need the :1). Once we have the string identifier of the device we want to alter, we can change its PSP with the following command.
esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -P VMW_PSP_RR
So, just do this three times, once for each datastore. Now, to have any newly added datastores default to Round Robin we need to first figure out which SATP the iSCSI datastores are utilizing, then associate the VMW_PSP_RR PSP with it. We can use the following command to see which SATP is associated with our devices.
esxcli storage nmp device list
As you can see, our iSCSI datastores are being claimed by the VMW_SATP_DEFAULT_AA SATP. So, our next step would be to associate the VMW_PSP_RR PSP with this SATP – I know, crazy acronyms! To do that we can use the following command.
esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR
This command will ensure that any newly added device claimed by the default AA SATP – which includes our newly added iSCSI datastores – will get the Round Robin PSP.
At this point we are done with this scenario, but while I was doing this I realized there might be a quicker way to change those PSPs on our existing LUNs. If we associate our SATP with our PSP first, then we can simply utilize the following command on each of our datastores to force them to change their PSP back to the default (which will be RR, since we just changed it).
esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -E
Of course we have to run this on each datastore as well – oh, and on every host 😉
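If typing that out three times per host feels tedious, a small shell loop works in the ESXi shell too – a sketch, where the three device identifiers are placeholders you'd fill in from your own extent list:

for dev in <device-id-1> <device-id-2> <device-id-3>; do
    esxcli storage nmp device set -d "$dev" -E
done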
Big Joe, your coworker, just finished reading a ton of vSphere-related material because his poor little SQL server on his iSCSI datastore just isn't cutting it in terms of performance. He read some best practices which stated that the max IOPS for the Round Robin policy should be changed to 1. He has requested that you do so for his datastore (iSCSI1). The storage team has given you the go-ahead, but said not to touch any of the other datastores or you're fired.
Nice, so there is really only one thing to do in this scenario – change the max IOPS setting for the iSCSI1 device. So, first off, let's get our identifier for iSCSI1.
Once we have our identifier we can take a look at the Round Robin settings for that device with the following command.
esxcli storage nmp psp roundrobin deviceconfig get -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________
As we can see, the IOOperation Limit is 1000, meaning it will send 1,000 I/O operations down a path before switching to the next one. The storage team is pretty adamant that we switch this to 1, so let's go ahead and do that with the following command.
esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________ -t iops -I 1
Basically, what we define with the above command is that we change that 1000 to 1, and specify that the type of switching we use is iops (-t). This could also be set with -t bytes, entering the number of bytes to send down a path before switching.
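For completeness, the bytes-based variant would look something like the following – the byte value here is purely illustrative, not something from the scenario:

esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________ -t bytes -B 10485760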
So, that's basically it for this post! Let me know if you like the scenario based posts over me just rambling on about how to do a certain task! I've still got lots more to cover so I'd rather put it out there in a format that you all prefer! Use the comments box below! Good Luck!