Last month I published a post regarding the Dell VRTX, ESXi 5.5, and storage – or the lack thereof. Shortly after publishing that article, Dell announced full support for ESXi 5.5 and released an ESXi 5.5 image on their website for those looking to upgrade or install. On the chance that my little driver workaround might affect support, I decided I'd better pull the image down and get it installed on the few VRTXs I had already deployed.
That said, looking at the build numbers and hoping not to redo all of the configuration I had already applied, I decided to take the upgrade route – even though the only difference was most likely the storage driver. The upgrade process itself went smoothly – no issues, no problems. But after it was complete, guess what was missing? Yup, the datastore was gone again!
Where’s Waldo?
Now this wasn't the same missing-storage issue I described in the last post. Previously I couldn't see the storage at all; this time, when looking at my Storage Adapters, I could actually see the device that hosted my datastore listed. So it was off to the CLI to see if I could get a little more information about what was going on.
To the CLI Batman!
After doing some poking around I discovered that the volume was being detected as a snapshot/replica. Why did this happen? I have no idea – maybe it's the fact that I was messing around with the storage drivers 🙂 I guess that's why they say things are supported and unsupported 🙂 Either way, I found this out with the following command.
esxcli storage vmfs snapshot list
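For reference, the output of that command looks roughly like the block below. This is an illustrative reconstruction, not a capture from my host – the volume name and counts will differ in your environment – but the "Can mount: true" line is the part that matters:

```
5315e865-0263a58f-413a-18a99b8c1ace
   Volume Name: datastore1
   VMFS UUID: 5315e865-0263a58f-413a-18a99b8c1ace
   Can mount: true
   Reason for un-mountability:
   Can resignature: true
   Reason for non-resignaturability:
   Unresolved Extent Count: 1
```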
This command actually displayed the volume I was looking for and, more specifically, showed that it was mountable. So my next step was to mount that volume again. Take caution here if you are doing the same. I know for sure that this volume is the one I'm looking for – but if you have an environment with lots of LUN snapshots/replicas, you will want to ensure that you never mount duplicate volumes with the same signature – strange things can happen. Anyway, to mount the volume, take note of the VMFS UUID and use the following command.
esxcli storage vmfs snapshot mount -u 5315e865-0263a58f-413a-18a99b8c1ace
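As an aside – this wasn't my situation, but it's worth knowing the other path. If the volume really is a snapshot or replica and the original LUN is still mounted somewhere, force-mounting the copy with the same signature is exactly what you want to avoid. In that case esxcli can write a new signature to the copy instead. Note that resignaturing changes the datastore's UUID (and ESXi renames it with a snap- prefix), so any VMs on it will need to be re-registered:

```shell
# Assign a new signature to the snapshot volume instead of force-mounting it.
# After this, the datastore comes back under a new UUID and a snap-xxxxxxxx-
# style name, so re-register any VMs from it afterwards.
esxcli storage vmfs snapshot resignature -u 5315e865-0263a58f-413a-18a99b8c1ace
```

(This can only run on an ESXi host, so treat it as a sketch of the alternate workflow rather than something to paste blindly.)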
And with that you should now have your Dell VRTX storage back online – everyone is happy and getting along once again – Thanks for reading!
Just FYI, we had a very similar issue after updating to 5.1 U2 (from the recently Dell-released image, build 1483097) and your procedure also solved our problem. Many thanks!
You're welcome! Glad it all worked out
So when I use the command
“esxcli storage vmfs snapshot list”
I get no return – the CLI immediately jumps down to another line so I can type another command.
I'm on ESXi 5.5 with update patches 1 and 2, using the Dell VRTX 620 blades. Been stuck on this for days! Would appreciate any help!
Never mind, I just found this:
http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=5YC4T
I’ll post back if this solved my issue….
Did you do a storage rescan after the snapshot command, and then try the snapshot command again?
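For anyone following along, the rescan I'm suggesting can be done straight from the CLI as well – something along these lines:

```shell
# Rescan all storage adapters so ESXi re-probes for devices and VMFS volumes,
# then check the snapshot list again to see if the volume now shows up.
esxcli storage core adapter rescan --all
esxcli storage vmfs snapshot list
```

(Again, esxcli only exists on the ESXi host itself, so run this from an SSH session on the host.)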
Sometimes this helps, but it seems to me like you actually applied the Dell image, which means you re-installed
Also, might be a good idea to make sure you have the latest firmware?
Just thoughts!
Thanks! I ended up realizing Dell had multiple custom VMware ISOs. I later ran into another issue with the network card and distributing resources.
I'm sorry to say we went with Hyper-V – the setup and install was a lot easier, and the VRTX runs beautifully! We only needed to install the SAS driver on Windows Server, which is as easy as double-clicking 😮
Hi, I’m desperate for some advice. I recently bought a VRTX with just 1 M520 node. After installing Dell customized ESXi 5.5 Update 2 image, the hypervisor simply has no network.
DHCP was unsuccessful, set static IP to a valid one but still Test management network shows ping fail to the switch on VRTX as well as my gateway.
Neither can any of the computers on my network ping the ESXi host. I've scoured the net for a solution to no avail. The tg3 drivers are already the latest, and I've also tried patching ESXi to the latest according to the deployment guide provided by Dell. All ports in the VRTX switch are set to VLAN ID 1, which is how it's been since delivery, and I didn't change any chassis configuration. I tried setting ESXi to VLAN 1 too, but it doesn't work. I'm at a loss. Is there anything I need to do to configure the physical switch of the VRTX to allow ESXi to communicate with it?
Sorry for the rant and I don’t mean to go off topic from your article but like I said, I’m desperate. Thank you for reading.
Problem solved. Turns out the integrated R1-2401 switch firmware needed an upgrade from 1.0.0.62 to 2.0.0.46.
It was a mind-boggling ordeal for me, as everything seemed to check out on the old firmware, and this new firmware version isn't specifically listed in the VRTX drivers list as fixing this issue.
In case it helps someone facing the same situation as mine, you can download the new firmware for your VRTX switch here. You'll need to incrementally upgrade to 1.0.0.63 before moving up to the 2.x versions, but all of that is included in the zip bundle – just follow the PDF instructions and you're all set.
http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=M12MC
Regards,
Zen
I just ran into this with my VRTX after upgrading the RAID controller firmware. I had already upgraded to 5.5 and had things working with the 6.801.52 driver, but after applying the latest firmware the storage dropped off. I could see the controller in VMware under Storage Devices, and it would come up under Add Storage (I chose to add with the same ID), but it would not actually mount.