Tag Archives: Dell
Next in the long list of previews for Tech Field Day 12 is DellEMC – you know, that small company previously known as EMC that provides a slew of products primarily based on storage, backup, cloud and security. Yeah, well, apparently 67 billion dollars and the largest acquisition in the tech industry ever allows you to throw Dell in front of their name 🙂 November 16th will be DellEMC’s first Tech Field Day presentation under the actual DellEMC name – split out, we have seen Dell at 7 events and EMC at 5. So let’s call this their first rather than combining them both for that dreaded number 13….
We all got a look at just what these two companies look like when combined, as the newly minted DellEMC World just wrapped up! We saw a number of announcements around how things will play out now that these two companies are sharing the same playground, summarized as best I can as follows…
- Hyper-converged – Big announcements around how PowerEdge servers will now be a flavor of choice for VxRail deployments. Certainly this brings an element of choice – in terms of the customization of performance and capacity provided by Dell – to the hyper-converged solution once provided by EMC. The same goes for the rail’s big brother, VxRack.
- DataDomain – the former EMC backup storage solution will also be available on DellEMC PowerEdge servers. What was once a hardware appliance is now a piece of software bundled on top of your favourite PowerEdge servers. On top of that, there are updates allowing data to be archived to the cloud, and multi-tenancy for service providers.
- Updates to the Isilon series, including a new All Flash version being added to the scale-out NAS system.
Dell has not been shy as of late about making BIG moves – going private, then buying out EMC. Certainly this transition is far from over – there is a lot of work that still has to take place in order to really merge the two companies together. From the outside things appear on the upside (except for the fact that I’m getting a ton of calls from both companies looking to explain everything now), however there are still many unanswered questions as to what will happen with overlapping product lines… From the inside I can’t really say – I have no idea – all I know is I’m sure it’s not an easy thing for anyone when you take 70,000 EMC employees and throw them in with Dell’s 100,000+ – there will definitely be some growing pains there…
Only time will tell how DellEMC changes the story, if at all at Tech Field Day 12. DellEMC are up first thing on November 16th – follow along with the live-stream, keep up with all things mwpreston @ Tech Field Day 12 here, and stay tuned on the official landing page for more info! This is destined to be a good one! Thanks for reading!
Have you ever tried to update the firmware of a blade chassis full of blades from start to finish? It’s an absolute pain! There are so many different components and pieces that need attention, and there are about five different processes for applying updates – some working one way and others requiring a completely different process. Honestly, it’s a hot mess. Below is a method that can be performed mostly from within the GUI of the CMC and iDRACs – there are other ways of doing this by creating a bootable USB drive, however I wanted to try and get the process down without using a USB key at all – and this is the best I could come up with.
Keep in mind we will be using the Dell Repository Manager to perform some of these updates, so if you don’t have that set up for your m1000e or VRTX, go ahead and check out this post which explains exactly how to export your inventory and create a CIFS share to pull your firmware from. For the sake of this article I’ve applied the updates in the following order…
- CMC Firmware
- IOMINF Firmware
- iKVM Firmware
- Update iDRAC to a version that supports updating from a network share.
- Apply any updates from the Dell Repository Manager (BIOS, LifeCycle, iDRAC, PERC, Network, Mez cards, Storage Controllers, Drive Firmware, etc…)
- Blade CPLD
- Remaining components in blades.
So with all this said, let’s just get started. First up we need to update the CMC on the m1000e. This is a relatively easy process and is driven by navigating to Chassis Overview->Update on the m1000e.
You can see that the CMC is redundant, with an active and a standby component. First update the standby by selecting cmc-standby and clicking ‘Apply CMC Update’. Browse to your firmware image (ending in .cmc) and click ‘Begin Firmware Update’.
This may take several minutes, and depending on your initial version of the CMC you may only see a simple ‘Transferring image’ message – either way just let it be and wait for it! Once the component has been upgraded, again depending on your initial CMC version, you may receive a ‘Done’ status or you may be forced to log into the CMC again. You can then check that the standby firmware has indeed been updated by navigating back to the update page. At this point we can switch our active and standby firmware in order to boot to the new firmware. This is done by navigating to Chassis Overview->Troubleshooting->Reset Components and clicking the ‘Reset/Failover CMC’ button. After a little time your CMC will reboot and come back up with the newest firmware now sitting in cmc-active and the older version in cmc-standby.
At this point you can either leave your old firmware in cmc-standby in case something goes horribly wrong, or you can proceed to update it as well using the same process as above. I normally go ahead and update the second CMC in order to bring my chassis back into full redundancy, but make that decision for yourself – once you do, we are done with the CMC.
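For those who prefer the command line, the same CMC update and failover can also be sketched with RACADM. The commands below are a rough sketch only – the CMC IP, credentials, and TFTP server address are all placeholders, and flag behavior varies between CMC firmware levels, so verify against the RACADM reference for your chassis before running anything.

```shell
# Check current firmware versions on the chassis (remote RACADM; IP/credentials are placeholders)
racadm -r 192.168.1.10 -u root -p <password> getversion

# Push the new image to the standby CMC first, pulling firmimg.cmc from a TFTP server
racadm -r 192.168.1.10 -u root -p <password> fwupdate -g -u -a 192.168.1.50 -d firmimg.cmc -m cmc-standby

# Once the standby reports the new version, fail over so it becomes the active CMC
racadm -r 192.168.1.10 -u root -p <password> cmcchangeover
```

This mirrors the GUI flow above: update standby first, then fail over, then optionally repeat on the now-standby module.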
Next on the list to tackle is the IOMINF firmware. IOMINF is a funny one, as it may or may not be visible depending on the version of the CMC you are on. This piece of hardware is essentially a bridge between the CMC and the IOM device modules, and is important to update as well. The firmware for the IOMINF is actually included within the CMC firmware package that we just installed – so if there is an update for it you will see the devices listed as updatable; if there isn’t a firmware update available, you won’t even see them listed. Kind of an odd design, as I like to see everything that’s updatable within my system whether it’s up to date or not – odd! Either way, you can go into the CLI and check out the IOMINF if you want to, but the rule of thumb is: if it is listed in the GUI it needs an update; if it isn’t, you’re golden! As shown below I do indeed have an IOMINF update, as it is showing within my list.
As for the update process you need to go through the listed components selecting one at a time and clicking ‘Apply IOM Update’, then ‘Begin Firmware Update’. This again will take a few minutes per component, and after each and every update you should see the respective IOM device disappear from your list. Go ahead and update each one that is listed, keeping in mind that you may lose connectivity to your environment during different updates depending on how you are configured, setup, etc…
Next is the iKVM switch – there hasn’t been a new release of this in a while, however I did see that the current release has been updated recently, so for that reason, and for documentation purposes, I’ll go over the install anyways. You will need to extract a .bin file that is included in the .exe that is downloaded from Dell. The process is much the same as everything else: select your iKVM and click ‘Apply iKVM Update’, browse to your extracted .bin file and select ‘Begin Firmware Update’.
Once the thrilling process of updating IOMs, the iKVM and the CMCs has been completed, it’s time to move on to the components that reside within the blades themselves.
We have a couple of options at this point – we could simply update each and every component one by one, which could be time consuming and in my opinion just a tad bit maddening, or we could use the Dell Repository Manager, connect our CMC to a prebuilt catalog and let it detect and install the updates. I would recommend doing the latter if possible! It saves you the hassle of trying to find and download all the right files. If you don’t have Dell Repository Manager set up then have a look at this post where I explain how to get a repo set up specific to the inventory that you have within your m1000e. One of the prerequisites of using the CIFS/repository method is that we need to ensure our iDRAC is sitting at version 1.50 at the very least.
Since my iDRAC is below 1.5 I’ll need to update it first before going any further – to do so, access the actual iDRAC web GUI (do not try to update through the CMC, it’s a nightmare). Within the iDRAC web interface select ‘iDRAC Firmware Update’ and upload your payload (.d7) file. You can find this file in the payload directory after extracting the Windows executable that is downloaded from the Dell website. Once the file is uploaded simply click ‘Next’ and wait for the update to report success.
Once it has completed you will need to give it a good 5-10 minutes before the iDRAC will become ready again! Just relax for a spell! Oh, and repeat on all of the blades that you want to update moving forward…
Now that we have our iDRACs up to a version that supports updating from a network share, and a repository created on a network share, we can proceed with the rest of the updates in the order we specified at the beginning of this post – which means next up is the BIOS. To rescan for any updated files in our repository, click the ‘Check for Updates’ button after selecting ‘Update from Network Share’.
As we can see above, we now have a nice listing of those items within our blades that have updates available within the repository. At this point it’s as simple as clicking ‘Update’ (at the bottom of the listing) and letting Dell do the heavy lifting. This should take care of updating the firmware of items such as mez cards, storage controllers, BIOS, backplanes, etc…
After you have updated all you can with the repository network share option, we need to move on to a few items that are outstanding, the first being the CPLD. This update, just like the iDRAC update, is done from within the iDRAC interface (under Update and Rollback). Simply upload the 64-bit update package for Windows (yes, for Windows, even if you are dealing with ESXi), select it and click ‘Install’.
After a quick reboot CPLD should now be good to go and we can continue on…
The only item left within my M620 was the SD module on which I have ESXi installed… In order to update this we need to use a process that is completely different from what we have been doing… Gotta love Dell. Anyways, the internal SD module updates come in the form of a live CD – download the ISO, mount it to your blade and boot to it.
The update itself auto-runs and should reboot once it’s complete! And that, my friends, is a day or two of complete chaos updating an m1000e with a few blades in it! I really wish that these big vendors could get all of these updates consolidated into one process, as it’s really a pain jumping back and forth from method to method. I’m not just picking on Dell here – the others are just as bad (good?).
Anyways, hopefully this helps someone with the process of moving through a slew of updates to their chassis! Also, if you know of any other updates I may have missed, or a method that may simplify things, be sure to let me know.
If you’ve ever tried to tackle all the firmware on an m1000e/VRTX and its respective blades, you probably know what a hot mess it can be – using various different methods to update different pieces of hardware: some requiring a Live CD, some requiring a bootable USB key, some requiring you to extract an EXE and find secret payload files, and some being installed through the GUI. It’s a full time job just to keep track of all the different pieces of firmware and how they are installed. Now, in order to help minimize this, Dell has introduced the Dell Repository Manager – an online repository that will fetch the updates that you need and serve them up to your CMC controllers for installation. The CMC can then go and fetch these firmware updates from DRM and apply them in an ordered, staged, and automated fashion!
Oh, it sounds so picture perfect, doesn’t it! The fact of the matter is, though, if you have ever tried to work with any of these update/firmware management products, be it from Dell, HP or anyone else, you know that they are not as intuitive and easy to use as advertised – that, and they are constantly being updated and best practices are constantly changing! It’s a moving target for sure! That said, taking the time to set it up properly still far outweighs the pain of having to hit the Dell support site, pulling down individual firmware packages and processing them manually – so why not spend the time now, which will hopefully save you some time later. I’m lazy by nature, and I’ve followed the steps below to make it work for me!
Install and Configure Dell Repository Manager
First up we need to set up the Dell Repository Manager (DRM) – you should be able to find the downloadable MSI under the Systems Management portion of any of your supported Dell products on their driver download page. The install itself requires just a few clicks of ‘Next’.
There is little configuration to do in order to get the DRM functional. Basically we just need to sync the Dell online database with our local install of DRM. To do so, select ‘Source->View Dell Online Catalog’. In the dialog box shown simply click ‘Yes’ to update your database.
After a few minutes of ‘Reading Catalog’ and ‘Writing data to database’ we should be good to go to continue with the creation of our repository.
Creating your m1000e repository
Now it is time to create a new repository which will pull down the updates for the hardware existing within our m1000e. In order to do this we will need to export the inventory of our CMC to a file to be imported into DRM. To do this, head to the Update tab within the CMC interface (Chassis Overview->Server Overview->Update). Select ‘Update from Network Share’ under ‘Choose Update Type’ and then click ‘Save Inventory Report’.
Doing this should save a file (Inventory.xml) to your local hard drive – this file contains the inventory of the blades and what is inside of them in terms of hardware, and needs to be copied over to your DRM server. Now we can proceed to create a new repository based off of our Inventory.xml file as shown below…
Within DRM select Repository->New->Dell Modular Chassis Inventory. Give your repository a proper name and description.
Select ‘Dell Online Catalog’ as our base repository.
Point to the location where you have copied the Inventory.xml file and ensure that ‘Latest Updates for all devices’ is selected.
On the Summary screen ensure that all of the OS components are selected. This just ensures that no matter what OS we have on the blades (Linux, Windows, ESXi) we will get the proper firmware packages needed to deploy.
After a few minutes we should be redirected back to our main screen of DRM with the focus on our newly created repository. The next thing we need to do is to export this repository into some sort of deployable format that can be consumed by our servers and chassis. To do so, make sure that all of the bundles listed are checked and select ‘Create Deployment Tools’ in the top right hand corner.
Here is where we determine what type of deployment tool to create – you can see we can create a bootable ISO, an SUU, etc. Since we will be installing from a network share we need to create a catalog, so select ‘Create Custom Catalog and Save Updates’ and continue.
Provide a path as to where to store your repository, catalog, and updates, and be sure to select to generate the ‘Full Repository’, as we will need both the catalog.xml file and the updates themselves – and ‘Next’.
Once completed, the job gets submitted into the Job Queue and can take quite some time, as it is pulling down all of the updates. You can monitor this by browsing the queue at the bottom of the screen. When it’s all said and done you should see a number of folders and the catalog.xml file in your specified location. Just a note here: if you don’t see Catalog.xml, I’ve had a few instances where I needed to re-run this process, selecting only to export the catalog file – then re-running the complete process again selecting the full repository – told you it was a hot mess! Anyways, after you are done go ahead and set up a Windows share somewhere on this system – it doesn’t matter where it is, so long as you can browse to this folder using it.
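Creating the share itself can be done from an elevated command prompt on the DRM server – a quick sketch, assuming the repository was exported to C:\DellRepo and you want a read-only share called DRM (the path, share name, and account are all placeholders for your environment):

```shell
:: Create a read-only share over the exported repository folder
net share DRM=C:\DellRepo /GRANT:Everyone,READ

:: Verify the share is visible
net share DRM
```

Read-only is enough here, since the CMC only ever pulls the catalog and update packages from the share.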
Setup the CMC
At this point we are through with the DRM and need to go back to our CMC in order to create our network share. This is done in the same location that we exported our inventory (Server Overview->Update), selecting ‘Update from Network Share’ as our Update Type and ‘Edit’ to setup our connection to our newly created CIFS share.
Enter in the information that pertains to your share using ‘CIFS’ as the protocol. You will need the IP address of your DRM server, the share name that you have set up, any further directories underneath the share if applicable, the name of the catalog (always Catalog.xml unless you specified otherwise) as well as the proper domain and credentials to connect. To test your connection to the server select ‘Apply’ and then ‘Test Network Connection’. Once successful, click ‘Back’ to return to our update screen.
At this point we should be able to simply click ‘Check for Updates’ and have the CMC query our DRM for any updates available and display them if so.
And voila – you can now select whether you would like to reboot now to apply updates or wait until the next reboot, and kick things off by clicking ‘Update’! Easy peasy, right? Not really – but at least this should help save some time….
A few troubleshooting tips to watch for
No Server is Ready for Update message
If this is displayed next to your Network Share information, then the first thing I would check is the version of the iDRAC on your blades. In order to update from a network share your iDRAC must be at version 1.5 or higher, so if you are lower, update it! As for how to do that, the easiest way I’ve found on a blade running ESXi is to enter the individual iDRAC web GUI for a given blade, and browse to the Update section under iDRAC Settings. This will look for a file, and it’s always a crapshoot as to where that file is or which package to download depending on the current version of iDRAC you are on. Since this is most likely an older version of iDRAC (below 1.5), you will most likely need the .d7 file. Download the EXE for your server labelled ‘iDRAC with Lifecycle Controller’ and extract the files within it to a folder – inside this folder you should see a payload directory. The file within it (firmimg.d7) is the file you will need to upload in order to update your iDRAC. After updating there will be a brief iDRAC outage as it reloads – when it’s back up, try ‘Check for Updates’ again on the CMC and it should now work.
Cannot check for updates message
This message is displayed when there is no catalog.xml file located in your exported CIFS repository. Check to see if it is there – if it isn’t, as mentioned earlier, re-run the Create Deployment Tools process and point to the same location, selecting Catalog file only. Once that has completed, start the Create Deployment Tools process again pointing to the same location, selecting Full Repository. Check to make sure the timestamp on your catalog.xml file is updated.
Caution icon next to repository progress
This generally means that you have some updates that require confirmation to download. Simply double click the job in the job queue, click ‘Confirmation Needed’, and click ‘Accept’.
Any other possible issues and errors
Virtualization Field Day 4 is right around the corner, taking place January 14-16 in Austin, Texas! Now, I’m trying to be smart and perform due diligence for the sponsors by doing a little pre-blogging and research on all of the vendors that will be participating. There are a total of 8 sponsors, and honestly, without them, Virtualization Field Day would be pretty boring. They are the ones that drive the content and spark up the great conversations that happen over the three days, both at the event and over Twitter.
Some of the sponsors I’m very familiar with; however, there are some that I’ve simply just heard of (thus my need to research). So, without further ado, here’s a small glimpse into 4 of the 8 wonderful sponsors making #VFD4 possible. Watch for a post with the remaining sponsors soon!
I had always been under the impression that CommVault was somewhat of a newer company – not quite “startup” status, but I had always just assumed they were a decade or so old – I couldn’t be more wrong. CommVault was actually formed as a development group within Bell Labs way back in 1988. To put 1988 into perspective, well, I was 10 and the Edmonton Oilers won the Stanley Cup – I have a son who will soon go through the first situation himself; the second, well, I don’t see Lord Stanley in oil country any time soon 🙂 Yeah, so back to CommVault. 1988 places CommVault at the 27-year mark – certainly a long period of time for a software company to be around. The magic sauce at CommVault is their flagship product – Simpana. Simpana dubs itself “A Single Platform to Protect, Manage, and Access all of your company’s information” and does so by providing customers with one code base, one product that handles all of your backup, replication, archive, and recovery needs. And by all they truly mean all – Simpana has support for physical servers, virtual servers (VMware and Hyper-V), desktops, laptops and even support for backing up and archiving individual application items such as SQL databases and Exchange emails. One driving feature behind Simpana that sparks my interest is the ability to “migrate” or backup/restore to/from VMware, Hyper-V, vCloud, etc. – this can definitely give customers options in terms of disaster recovery. Simpana has a ton of features, too many to go over in a small intro blurb, so check them out yourself – I can’t wait to see what CommVault has to offer for VFD4, and you can bet that I’ll summarize it as best I can here.
This was one of those lesser-known companies that I’d never heard of and had to do a little research on (sorry – that was a really bad joke 🙂 ). This will be Dell’s first time presenting at a Virtualization Field Day – they have certainly been present at Networking Field Day and Storage Field Day, as well as a couple of Tech Field Days, but this is their first go at a VFD. Dell is a huge player in terms of virtualization, and the company covers almost, if not all, components of a virtual data center – meaning they sell servers, networking, storage, software and services all relating to virtualization (server, desktop, networking, storage) technologies. With that said, the mystery of what component Dell is going to present on definitely makes Dell number 1 on my list of most anticipated sessions. The fact that the Dell sessions will be held deep within the guts of their Austin headquarters is also a pretty awesome perk.
Less than 6 months after coming out of stealth, Platform9 is positioned to make their first ever appearance at a Tech Field Day event on Wednesday afternoon. From what I can tell, Platform9’s goal is really to take the simplicity, agility, and convenience of the public cloud and apply that to a customer’s on-site, local hardware – kind of like, yup, you guessed it – a private cloud. There is a lot of competition in this space – and to tell you the truth, I personally don’t have a whole lot of experience with either Platform9 or their competitors, so I can only go on what I see on their website and what I have heard others say – this will change come next week during VFD4. Currently Platform9 supports KVM, vSphere, and Docker – whether or not this will be expanded I have no idea. The highlights I have noticed are the fact that the Platform9 management layer is somewhat hypervisor/technology agnostic – meaning vSphere/KVM VMs, along with Docker containers, are all treated as what is called an instance within Platform9 – and that it’s 100% cloud managed, delivered in a SaaS model. All of this, built on OpenStack – which could be a huge +1 on their part if they have simplified it enough. Again, I can’t find very much in terms of demos or videos out there, so I’m very excited to see what Platform9 has to offer come next Wednesday.
Scale Computing has been around since 2009, which makes them somewhat of a veteran in terms of converged infrastructure – but the fact is, unlike SimpliVity and Nutanix, Scale started out on a different path. They broke into the IT world by shipping scale-out storage – NAS/SAN models which were targeted toward the SMB market to help drive companies’ cost of storage down while providing a very simple, easy-to-use storage solution. This all changed in 2012 when Scale announced the availability of HC3 – a scalable, clustered hyper-converged node architecture that included their core storage, but now with compute and virtualization thrown into the mix. By 2012 Nutanix already had a piece of hardware shipping, with SimpliVity not long to follow the year after – but there seem to be a few things that differentiate the HC3. Perhaps the biggest is target audience – Scale has always been, and is still, very focused on the SMB market, which means price is one of the major differences. In order to drive down price, Scale developed their very own fork of KVM, meaning their offering comes complete – no need for VMware or any other hypervisor/management licensing. The HC3 piques my interest, as the SMB space has a lot of potential with virtualization – a lot of small companies are just getting started or still exploring virtualization options. I’ve not explored HC3 in enough detail to see if they have what it takes in terms of benefits and features to become a viable player – so I’m very anxious to see what they have to offer at VFD4.
And then there were four…
In an effort to keep this post short enough for someone to read (and to give me a break from writing), I think I’ll take the remaining four VFD4 sponsors (SimpliVity, SolarWinds, StorMagic and VMTurbo) and place them in a second post. Watch for that sometime soon!
Virtualization Field Day 4 kicks off next Wednesday, January 14th – as always, it is live-streamed, so you too can join in on the action by watching the live stream on the VFD4 landing page and participating in the conversations via Twitter using the hashtag #VFD4. Can’t wait!
Disclaimer: As a delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.
A colleague of mine, one who was attempting to troubleshoot some issues with Dell support, was asking about the possibility of gathering a DSET report on one of our hosts. DSET, or the Dell Server E-Support Tool, is used to gather hardware, storage, and OS support information, consolidating it into a single zip file which is used by support to troubleshoot and inventory your Dell PowerEdge servers. Needless to say, DSET was pretty simple in the Windows/physical world – simply install on the local OS, run the command and you are done.
In ESXi this becomes a little trickier. In fact, after reading up on some documentation I was somewhat reluctant, as it requires that the Dell OpenManage Server Administrator bundle be installed on your host. In the past I’ve found myself fighting with Dell OpenManage and Server Administrator bundles, as well as their remote counterparts – seeing that only certain versions work with certain ESXi releases, and having to match up version numbers exactly to make things function properly. That, and the fact that every time I seem to hit Dell’s support site there are new releases, really makes things, well, let’s say troublesome (or annoying).
Nonetheless I gave it a shot and after enough experimentation I found a combination that worked – so, in case you’re having the same issues maybe this will help.
First up, OpenManage
So, first we need to install the OpenManage Server Administrator bundle version 7.4 – you can find that located here. Go ahead and download the zip file and extract it to /var/log/vmware on your host. Yes, the package will look for that specific path, so you will need to be sure it is in /var/log/vmware. From there we can simply install the VIB with the following command.
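A sketch of the install from the ESXi shell is below – the bundle filename is an assumption (yours will differ depending on the exact 7.4 build you download), and esxcli wants the absolute path to the zip:

```shell
# Optional but recommended: put the host in maintenance mode first
esxcli system maintenanceMode set --enable true

# Install the OpenManage offline bundle - note the absolute path requirement
# (filename is illustrative; substitute the zip you actually downloaded)
esxcli software vib install -d /var/log/vmware/OM-SrvAdmin-Dell-Web-7.4.0-ESX55i.zip
```

A reboot of the host is required before the OMSA CIM provider becomes available to DSET.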
Next – DSET
The version of DSET that we will install is 3.6. The installation for DSET is the standard Next-Next type of install – so I won’t go over much of that – just be sure to select both the CIM provider and the collector. You can find it here. Once done you are good to go. Launch a command shell (as administrator), browse to C:\Program Files (x86)\Dell\AdvDiags\DSET\bin and run the DellSystemInfo.exe command with your desired parameters (example below)
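As an illustration, a remote collection against an ESXi host looks something like the lines below. Fair warning: the flags shown are from memory and can differ between DSET releases, so treat the host, credentials, and switches as placeholders and confirm the real options with DellSystemInfo.exe -h before running it.

```shell
:: Illustrative only - flag names vary by DSET release; check DellSystemInfo.exe -h
cd "C:\Program Files (x86)\Dell\AdvDiags\DSET\bin"
DellSystemInfo.exe -s 192.168.1.20 -u root -p <password> -d hw,st -r C:\temp\dset-report.zip
```

The resulting zip is what you forward to Dell support.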
There you go! Your Dell DSET log that you can now forward off to support to get your issues looked after. This certainly isn’t a very difficult thing to do, but it’s troublesome nonetheless trying to match up versions to make things work. Anyways, hope this helps anyone having issues.
Last month I published a post in regards to the Dell VRTX, ESXi 5.5, and storage – or the lack thereof. Well, shortly after publishing that article Dell announced full support for ESXi 5.5 and released an ESXi 5.5 image on their website for those looking to upgrade or install. On the off chance that my little driver workaround might affect support, I decided I’d better pull the image down and get it installed on the few VRTXs I had already deployed.
That said, looking at the build numbers and attempting to avoid redoing all of the configuration I had already applied, I decided to take the upgrade route – even though the only difference was most likely the storage driver. The upgrade process itself went smoothly – no issues, no problems. But after it was complete, guess what was missing? Yup, the datastore was gone again!
Now this wasn’t the same missing-storage issue that I described in the last post. Previously I couldn’t see the storage at all; this time, when looking at my Storage Adapters I could actually see the device that hosted my datastore listed. So it was off to the CLI to see if I could get a little more information about what was going on.
To the CLI Batman!
After doing some poking around I discovered that the volume was being detected as a snapshot/replica. Why did this happen? I have no idea – maybe it’s the fact that I was messing around with the storage drivers 🙂 I guess that’s why they say things are supported and unsupported 🙂 Either way, how I found this out was with the following command.
esxcli storage vmfs snapshot list
This command actually displayed the volume that I was looking for, and more specifically showed that it was mountable. So my next step was to actually mount that volume again. Take caution here if you are doing the same. I know for sure that this volume is the actual volume I’m looking for – but if you have an environment with lots of LUN snapshots/replicas, you will want to ensure that you never mount duplicate volumes with the same signature – strange things can happen. Anyways, to mount the volume, take note of the VMFS UUID and use the following command.
esxcli storage vmfs snapshot mount -u 5315e865-0263a58f-413a-18a99b8c1ace
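One caveat worth noting: a mount done with the esxcli snapshot mount command above does not persist across reboots. If you want the volume mounted permanently under its original signature, the esxcfg-volume equivalent with a capital -M does a persistent mount; alternatively, resignaturing writes a new UUID to the volume so ESXi stops treating it as a snapshot. Both sketches below reuse the UUID from my example – substitute your own, and only resignature when you are sure the original volume is not also presented to the host.

```shell
# Persistently mount the snapshot volume, keeping its existing signature
esxcfg-volume -M 5315e865-0263a58f-413a-18a99b8c1ace

# OR: assign a new signature so the host no longer sees it as a snapshot
# (the datastore gets a new "snap-xxxxxxxx-" prefixed name; VM registrations may need fixing up)
esxcli storage vmfs snapshot resignature -u 5315e865-0263a58f-413a-18a99b8c1ace
```

Either approach saves you from re-mounting by hand after every host reboot.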
And with that you should now have your Dell VRTX storage back online – everyone is happy and getting along once again – Thanks for reading!