8 weeks of #VCAP – LUN Masking

Alright – here we go – the push is on.  8 weeks to cover some random, sparingly used topics off of the VCAP5-DCA blueprint.  Today, let's tackle an item out of the very first objective on the blueprint: LUN masking.

LUN masking is essentially a process that masks away LUNs – that is, makes those LUNs inaccessible to certain ESXi hosts.  You know when you go into your backend array and say which hosts can have access to which LUNs – yeah, that's basically LUN masking.  For the sake of this exam, however, it's performed on the host itself through something called claimrules.  Doing it that way is quite a bit harder, but it's all explained below…

So first off, we need to decide on a LUN that we want to mask.  There are many ways to list all of your LUNs/datastores through the CLI and through the vSphere Client, so pick your beast.  What we need is the LUN's identifier – the long string of characters that ESXi uses to uniquely identify the LUN.  Since the claimrule is created within the CLI, we might as well find these identifiers in the CLI as well – you may be pressed for time on the exam.  So, let's first list our LUNs, showing each identifier.

esxcli storage core device list | less


As you can see I piped the output to less.  If we don't do this and there are a lot of LUNs attached to your host, the output can get a little overwhelming.  "esxcfg-scsidevs -m" will also give you some great information here, and it may be a little more compact than the esxcli command.  Choose your weapon, so long as you can get the identifier.  The LUN shown in the above image has an identifier of "naa.6006048c6fc141bb051adb5eaa0c60a9" – this is the one I'm targeting.
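If you'd rather skip paging through less entirely, you can grep the identifiers straight out of the list – each device block in the output starts with its identifier in column 0.  A quick sketch below, run against a stand-in sample (the display name and layout are approximated from an ESXi 5.x host; on a real host you'd pipe "esxcli storage core device list" itself):

```shell
# Stand-in for 'esxcli storage core device list' output -- each device
# block begins with the identifier at column 0, followed by indented
# detail lines (layout approximated, sample values hypothetical).
sample='naa.6006048c6fc141bb051adb5eaa0c60a9
   Display Name: Fibre Channel Disk (naa.6006048c6fc141bb051adb5eaa0c60a9)
   Size: 10240'
# Identifier lines are the unindented ones:
printf '%s\n' "$sample" | grep '^naa\.'
```

On a real host that boils down to: esxcli storage core device list | grep '^naa\.' (add '^t10\.' or '^eui\.' if your devices use those identifier formats instead).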

Now that we have our identifier, it's time to do some masking.  We have some decisions to make at this point, though.  We can mask by path (removing individual path visibility), by vendor (masking all LUNs from a specific vendor), or by storage transport (yeah, like all iSCSI or all FC).  If we look at the currently defined claimrules we can see most types are utilized.  To do so, use the following command.

esxcli storage core claimrule list


For our sake here we will go ahead and perform our masking by path.  I'll note below where vendor or transport would fit in if you were to choose one of those instead.

So, in order to do it by path, we need to see all of the paths associated with our identifier.  To do so, we can use the following command along with grepping for our identifier.

esxcfg-mpath -m | grep naa.6006048c6fc141bb051adb5eaa0c60a9

Alright, so you can see we have 2 paths.  That means in order to completely mask away this LUN we will need to do all of the following twice: once using the vmhba32:C1:T0:L0 path and once using vmhba32:C0:T0:L0.

Now, time to begin constructing our claimrule!  First off we will need an ID number.  Certainly don't use one that is already taken (remember "esxcli storage core claimrule list"), or you can use the -u option to auto-assign a number.  I like to have control over this stuff so I'm picking 200.  Also to note is the -t option – this specifies the type of claimrule (remember when I said we could mask by vendor).  Our -t for a path will be location, however this could be vendor or transport as well.  (Running "esxcli storage core claimrule add" with no arguments will output a bunch of examples.)  So, in order to mask by location we will specify the -A, -C, -T, and -L parameters referencing our path, and -P states we want to use the MASK_PATH plugin.  The command should look like the one below.

esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH

and for our second path – don't forget to put a new rule ID

esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH
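If you were dealing with more than a couple of paths, you could generate these add commands from the path names themselves rather than typing each one.  A dry-run sketch below – it echoes the commands instead of executing them (drop the echo on a real host), and rule IDs 200 and 201 are assumed to be free:

```shell
# Generate one MASK_PATH claimrule per path name, echoing each command
# rather than running it (remove 'echo' on a real ESXi host).
rule=200
for path in vmhba32:C1:T0:L0 vmhba32:C0:T0:L0; do
  # Split vmhbaN:Cx:Ty:Lz into adapter/channel/target/lun pieces.
  IFS=: read -r a c t l <<EOF
$path
EOF
  echo "esxcli storage core claimrule add -r $rule -t location" \
    "-A $a -C ${c#C} -T ${t#T} -L ${l#L} -P MASK_PATH"
  rule=$((rule + 1))
done
# Prints:
#   esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH
#   esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH
```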

Running "esxcli storage core claimrule list" will now show our newly created rules, however they haven't been applied yet.  Basically they are only in the "file" class – we need them to be in "runtime".  This is as easy as running

esxcli storage core claimrule load
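A quick sanity check here: in the claimrule list output, the Class column tells you where each rule lives, and after the load each new rule should appear twice – once as "file" and once as "runtime".  The sketch below runs against a stand-in sample (column layout approximated from a 5.x host); on a real host you'd pipe "esxcli storage core claimrule list" instead:

```shell
# Stand-in for 'esxcli storage core claimrule list' output after the
# load (layout approximated): rule ID in column 2, class in column 3.
sample='MP  200  runtime  location  MASK_PATH  adapter=vmhba32 channel=1 target=0 lun=0
MP  200  file     location  MASK_PATH  adapter=vmhba32 channel=1 target=0 lun=0
MP  201  runtime  location  MASK_PATH  adapter=vmhba32 channel=0 target=0 lun=0
MP  201  file     location  MASK_PATH  adapter=vmhba32 channel=0 target=0 lun=0'
# Rule 200 showing both classes means the load took effect:
printf '%s\n' "$sample" | awk '$2 == 200 { print $3 }'
```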

Now we are all set to go – kinda.  The rules are in runtime, but they will not be applied until that device is reclaimed.  So, a reboot would work here – or, a more ideal solution, we can run a reclaim on our device.  To do so we will need that device identifier again, and the command to run is…

esxcli storage core claiming reclaim -d naa.6006048c6fc141bb051adb5eaa0c60a9

And done!  And guess what – that LUN is gonzo!!!  Congrats Master Masker!

HEY!  Wait!  I needed that LUN

Oh SNAP!  This is my lab environment and I need that LUN back.  Well, here's how we can undo everything we just did!

First off, let's get rid of those claimrules we just added

esxcli storage core claimrule remove -r 200
esxcli storage core claimrule remove -r 201

Listing them out will now show them only in runtime; they should no longer be in file.  Let's get them out of runtime by loading our claimrule list again.

esxcli storage core claimrule load

Now a couple of unclaim commands on our paths.  This will allow them to be reclaimed by the default plugin.

esxcli storage core claiming unclaim -t location -A vmhba32 -C 0 -T 0 -L 0
esxcli storage core claiming unclaim -t location -A vmhba32 -C 1 -T 0 -L 0
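With lots of paths, these unclaim commands could also be generated from the path names.  A dry-run sketch (echo instead of execute – drop the echo on a real host):

```shell
# Generate one 'claiming unclaim' per path name, echoed as a dry run
# (remove 'echo' on a real ESXi host).
for path in vmhba32:C0:T0:L0 vmhba32:C1:T0:L0; do
  # Split vmhbaN:Cx:Ty:Lz into adapter/channel/target/lun pieces.
  IFS=: read -r a c t l <<EOF
$path
EOF
  echo "esxcli storage core claiming unclaim -t location" \
    "-A $a -C ${c#C} -T ${t#T} -L ${l#L}"
done
```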

A rescan of your vmhba and voila!  Your LUN should be back!  Just as with Image Builder, I feel like this would be a good thing to know for the exam.  Again, it's something that can easily be marked and tracked, and it's very specific!  Happy studying!

  • Josh Coen

    Great post Mike. Here’s a video demonstration of LUN masking in case your readers are interested.


    • mwpreston

      Great video Josh – and for those that might stumble in here, Josh also has a great study guide located here: http://www.valcolabs.com/vcap5-dca/ Invaluable resource for the DCA

  • Jason Boche

    FYI per VMware KB 1009449

    The value for the parameter --rule can be any number between 101 and 200 that does not conflict with a pre-existing rule number


    • mwpreston

      Thanks Jason – I always thought they were reserved 🙂 Now I know! Thanks for the links back as well and congrats

  • Pingback: 8 weeks of #VCAP - Random Storage Scenarios (Section 1 - Part 1) | mwpreston.net

  • ccieinferno

    If an ESXi 5 host is connected to multiple storage arrays (EMC/NetApp/Dell etc.) and we want to mask all paths to one particular storage array only – for example, all paths from the ESXi 5 host to the EMC array – what will be the command for the same? Also assume the host has 2 FC adapters, vmhba2 and vmhba3

    • mwpreston

      Hey – thanks for the comment. You can mask LUNs by vendor and/or model by changing the -t parameter to vendor – i.e., to mask all paths to an array from vendor MyVendor and model MyModel I would use

      esxcli storage core claimrule add -u -t vendor -V MyVendor -M MyModel -P MASK_PATH

      To list out your vendors and models to get the correct string you can use

      esxcli storage core device list

      Hope this helps 🙂


      • ccieinferno

        Thanks for the reply. What is this -u parameter for?
        Also, do we need to add any core claiming unclaim/reclaim command as well?

        • mwpreston

          -u just assigns an automatic rule ID, so we don’t have to do that…

          and yes, you will need to unclaim those devices

          esxcli storage core claiming unclaim -t vendor -v MyVendor

          and then rescan your storage adapters – also, note this time the lowercase -v is used 🙂 Nice eh?

          • ccieinferno

            Thanks Preston. Your site is really helping me to prepare for VCAP5 DCA. I have already purchased your Troubleshooting Book. Keep it up with the good work.

          • ccieinferno

            Hi Preston,
            I successfully removed the storage array from the host using the above commands.

            How do I bring the storage array back?
            I removed the core claimrule which I added, but the storage array is not visible.

  • independent_forever

    Interesting. I never knew what they meant by LUN masking at the HOST as I always use the back-end SAN utilities to simply remove the host from the storage group to avoid incorrectly writing to datastores. I often use this when building hosts and rarely need to mask single LUNs from specific HOSTs, although I’m sure there are reasons. I still prefer working at the SAN level and not involving ESX hosts themselves as this seems inefficient even if they do have it on the exam…great info though.

    • mwpreston

      I’m the same way, always use the tools provided by the array vendors to unpresent LUNs from hosts.

      That said, it’s listed on the blueprint so I covered it! As for a use case, I’m still trying to find one 🙂

      Thanks for the comment

      • independent_forever

        Always good to know alternatives should the need arise. I just couldn’t think of any reason to mask individual LUNs from specific hosts so I thought I would comment…thanks for the information most appreciated.

  • Sho

    Confused! First we reclaim using the device identifier (naa.**), but we unclaim using location – why not device again?

  • Pingback: Understand and apply LUN masking using PSA-related commands | Ahmad Sabry ElGendi