Whilst in the lab recently I got into a bit of a scuffle with vSAN and its claimed disks. Basically, I had a cluster on which I wanted to configure an All-Flash vSAN instance; however, the cluster in question had already been configured in that state once before. Once the hosts had been split up and redeployed, I went back in to configure a couple of different instances of vSAN. As you know, the first step is marking those undetected SSDs as flash within vSAN; however, when I attempted to do so, the following error was displayed.
Not the most descriptive of all errors – just a simple “Cannot change the host configuration”. A bit more digging into the actual event turned up the following.
A little bit better – now that we know the disk is, for some reason, still claimed on the actual host, we can at least determine that the problem lies within the host itself. Running the following command on the host shows that the disk in question does indeed still belong to a vSAN disk group.
esxcli vsan storage list
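For reference, the listing prints a block of key/value pairs per claimed device, including the disk group UUID we'll need later. A minimal sketch of pulling that field out of the output – the device name and UUID values below are made up for illustration:

```shell
# Hypothetical excerpt of `esxcli vsan storage list` output for one claimed disk;
# the field names match what the command prints, but the values are invented.
sample_output='naa.500a07510f86d685
   Device: naa.500a07510f86d685
   Is SSD: false
   VSAN UUID: 52aa11bb-0000-0000-0000-000000000002
   VSAN Disk Group UUID: 52f8a2b4-0000-0000-0000-000000000001
   Used by this host: true
   In CMMDS: true'

# Pull out just the disk group UUID, the value the removal step needs.
echo "$sample_output" | awk -F': ' '/VSAN Disk Group UUID/ {print $2}'
```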
The above only shows one of the five disks I was looking to reconfigure into a new vSAN disk group – but at this point I thought I’d take it one by one and remove the disks. After running the command I thought would work, esxcli vsan storage remove -s <disk_id>, I was left with yet another “useful” error message…
Hmm, “Unable to complete Sysinfo operation” – ok! Let’s try it with the -d option, which is used for magnetic disks (as this disk was no longer flagged as flash).
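For completeness, the magnetic-disk form of the command looks like this – the disk ID placeholder is whatever device name esxcli vsan storage list reported:

```shell
# Remove a single magnetic (non-SSD) disk from its vSAN disk group.
# <disk_id> is a placeholder for the device name, e.g. naa.xxxxxxxx.
esxcli vsan storage remove -d <disk_id>
```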
Well, this is much better! At least we are getting some information about what “isn’t” happening and why! The disk group in question was still kicking around, and in turn was configured with compression and deduplication. So the next logical operation, given that I didn’t care about the old disk group configuration or anything that might still be on the disks, was to blow away the whole disk group itself. This is done by specifying the --uuid or -u option with the same command, but pointing it at the actual disk group UUID instead of an individual disk.
esxcli vsan storage remove --uuid <disk_group_uuid>
After grabbing the UUID from the VSAN Disk Group UUID field in the device listing and running the above command, I re-ran the list command.
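Putting the whole sequence together, a sketch of what this looks like on the host – note that it destroys all data in the disk group, so only run it when you truly don’t care about its contents:

```shell
# Grab the disk group UUID straight out of the listing...
dg_uuid=$(esxcli vsan storage list | awk -F': ' '/VSAN Disk Group UUID/ {print $2; exit}')

# ...and blow away the entire disk group in one shot.
esxcli vsan storage remove --uuid "$dg_uuid"

# Verify: the listing should now come back empty.
esxcli vsan storage list
```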
As you can see, we no longer list any devices, disks, or disk groups within vSAN. This should mean that our disks are available once again for re-use in other scenarios – in my case, vSAN again! Heading back into the vSphere client and attempting to mark the disks as flash now succeeds! On a side note, don’t forget to clear off any partition information that may still exist on these disks, otherwise they will remain unavailable.
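If you’d rather clear those leftover partitions from the command line instead of the vSphere client, partedUtil on the host can do it – the device name below is a placeholder, and the partition numbers to delete come from the getptbl output:

```shell
# Show the partition table vSAN left behind on the device.
partedUtil getptbl /vmfs/devices/disks/naa.500a07510f86d685

# Delete each partition it lists by number (vSAN typically leaves two).
partedUtil delete /vmfs/devices/disks/naa.500a07510f86d685 1
partedUtil delete /vmfs/devices/disks/naa.500a07510f86d685 2
```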
Now I’m up and rocking again with my vSAN configuration! Just wash, rinse, and repeat on the remaining hosts you wish to re-use and you should be good to go! Not a super technical post, but I always like to write up any break/fix scenarios I run into whenever I can – so here’s hoping this helps someone else in the future! Thanks for reading!