Monthly Archives: October 2015
Back in July when I was selected to be part of the Veeam Vanguard program I wasn’t quite sure what to expect. Sure, having been part of other influencer programs I assumed the basics – swag, early briefings, access to product teams, etc. But after spending a day at VeeamON 2015 with my fellow Vanguards who made the trip, I’m realizing that it’s much, much more than just an influencer program – it’s an opportunity to learn and explore technologies that I might not otherwise encounter…
A bit about the Vanguard day
There were a lot of things happening at VeeamON during the Monday of the conference. There was a partner track happening and the VMCE training was going strong, but most important to me was the Vanguard day. For six or so hours the Vanguards gathered in a room to talk about everything Veeam. We got an overview of the program – why it was started, where Veeam sees it going – not to mention a lot of cool swag (which means a lot to me, as I packed really light and needed some clothes). Unfortunately most of what we spoke of was under strict NDA and embargoed until a later date – I can’t remember which was which, so I’ll just take the safe road and not mention any of it! Aside from the information obtained, I began to see another benefit, another advantage to being a Vanguard – one which hadn’t even occurred to me…
Hyper-V – The dark side of the moon.
The Vanguard program is somewhat unique in terms of the knowledge and skillset of its members. Obviously we all share the same passion and love for the Veeam products we use, and evangelize them in some fashion, but we do so by running those products against different hypervisors. Instead of a sheer vSphere/Hyper-V smackdown, I’ve found it to be quite a learning experience. I’ve had numerous conversations with the Hyper-V experts in the Vanguard over the last few days, and it’s interesting to hear them share their struggles, problems, and workarounds from the Microsoft side – which tend to have a lot of overlap with the vSphere world.
The Vanguard program to me is more than just swag and recognition for being an industry influencer – it’s a program full of opportunity: opportunity to see what’s on the other side of the hypervisor fence, opportunity to meet and get to know new people, new experts from the Microsoft world whom I would normally not see on my travels, and opportunity to help shape and influence a product that both Hyper-V and vSphere customers depend on every day – because as more and more time passes, we are seeing more and more attention focused on the applications rather than on the hypervisors sitting underneath those apps. And who knows, maybe some day I’ll need to reach out to the Hyper-V guys for some help…
Also huge props to the Veeam All-stars who are running this program and organized the day for us – they will be some familiar names – Rick Vanover, Luca Dell’Oca, Clint Wyckoff and Mike Resseler – if not for the effort and determination these guys put in, the day would never have been what it was. Speaking of these guys, if you have a chance to view the opening general session of VeeamON, do it – they were involved in showing all of the v9 features and did a great job!
Yesterday during the opening general session at VeeamON, CEO Ratmir Timashev announced that Veeam is set to release yet another free tool into their backup arsenal, Veeam Backup for Linux. Roughly one year ago at Veeam’s inaugural conference he took the stage across the street and surprised most with the Endpoint Backup solution, allowing organizations to back up their physical desktops, laptops, and servers, so long as they were Windows based. Veeam Backup for Linux fills a hole that was left by Endpoint Backup: the millions of physical and cloud-based Linux machines out there “running the world”. And just like Veeam Endpoint Backup, Veeam Backup for Linux comes in at the low, low price of FREE!
What we know so far
Veeam Backup for Linux works by placing an agent into your OS – distribution-wise we are limited to just Debian- and Red Hat-based builds right now, with more to come as Veeam gets set to go GA with the product. From there, Veeam has built a proprietary Change Block Tracking (CBT) driver into the agent, allowing it to perform Veeam’s ever-popular incremental backups just as it would on a vSphere VM utilizing VMware’s CBT capabilities. As far as snapshots go, Veeam has gone custom on that play as well – although they fully support LVM, they’ve opted to use their own snapshot mechanism, which works at the block level and stores the snapshot data on your actual backup device rather than utilizing LVM snapshots, which would essentially place your snapshot data on the same volume you are backing up. All of this is provided in a consistent state, just as VSS quiesces your Windows servers.
As you can see above, the UI is actually quite slick. We know most Linux installs, cloud based or not, are deployed in a headless, bare-bones manner – with the vast majority of them not containing any desktop manager such as GNOME. Having the product report back progress in an organized manner through just the command line is a nice feature of Veeam Backup for Linux.
Veeam Backup for Linux also fully integrates with Veeam Backup and Replication – allowing us to back up to our existing backup repositories – meaning we can take advantage of all of the goodness that VBR provides. Things like encrypting our backup files, copying them off-site, setting up notifications, etc. – all of this can be done within the VBR console on our Veeam Backup for Linux backups.
Where do we go from here?
So where does this leave us? What are Veeam’s plans for VBL? This I don’t know, but I’m always willing to speculate. Personally I’d love to see Veeam Backup for Linux and Veeam Endpoint Backup have tighter integration with Veeam Backup and Replication – central management for jobs, remote deployments, etc. I’d love to see all of this functionality move into a paid version of VBR – I don’t think I’m alone in this either.
Secondly – and believe me, this is purely my speculation – I think Veeam Backup for Linux may open the doors for something more. As they have put resources into building the ‘data mover’ technology which can run on Linux, I’d love to see them port that into Veeam Backup and Replication in the form of proxies. Currently we have to deploy Windows-based proxies in order to scale our Veeam environment – in most cases this isn’t a big deal – but since version 3 I’ve been waiting for Veeam to release a small Linux-based proxy that we can utilize, something with a smaller footprint and without the costs of Microsoft licensing. Maybe Veeam Backup for Linux contains some base code that Veeam can utilize for this – again, this is purely speculation, and honestly I’ve only had one coffee this morning so I may be a little crazy.
With all that said, Veeam Backup for Linux is not yet available – it’s slated for release sometime in the first half of 2016! If you’d like, you can sign up for the first-come, first-served beta version here!
Have you ever tried to update the firmware of a blade chassis full of blades from start to finish? It’s an absolute pain! There are so many different components and pieces that need attention, and there are about five different processes for applying updates – some working one way and others requiring a completely different process. Honestly, it’s a hot mess. Below is a method that can be performed mostly from within the GUI of the CMC and iDRACs – there are other ways of doing this by creating a bootable USB drive, however I wanted to get the process down without using a USB key at all – and this is the best I could come up with.
Keep in mind we will be using the Dell Repository Manager to perform some of these updates, so if you don’t have that setup for your m1000e or VRTX go ahead and check out this post which explains exactly how to export your inventory and create a CIFS share to pull your firmware from. For the sake of this article I’ve applied the updates in the following order…
- CMC Firmware
- IOMINF Firmware
- iKVM Firmware
- Update iDRAC to version supporting network share updating.
- Apply any updates from the Dell Repository Manager (BIOS, LifeCycle, iDRAC, PERC, Network, Mez cards, Storage Controllers, Drive Firmware, etc…)
- Blade CPLD
- Remaining components in blades.
So with all this said, let’s get started. First up we need to update the CMC on the m1000e. This is a relatively easy process, driven by navigating to Chassis Overview->Update on the m1000e.
You can see that the CMC is redundant, with an active and a standby component. First update the standby by selecting cmc-standby and clicking ‘Apply CMC Update’. Browse to your firmware image (ending in .cmc) and click ‘Begin Firmware Update’.
This may take several minutes, and depending on your initial version of the CMC you may only see a simple ‘Transferring image’ message – either way, just let it be and wait for it! Once the component has been upgraded – again depending on your initial CMC version – you may receive a ‘Done’ status or you may be forced to log into the CMC again. You can then check that the standby firmware has indeed been updated by navigating back to the update page. At this point we can switch our active and standby firmware in order to boot to the new firmware. This is done by navigating to Chassis Overview->Troubleshooting->Reset Components and clicking the ‘Reset/Failover CMC’ button. After a little time your CMC will reboot and come back up with the newest firmware now sitting in cmc-active and the older version in cmc-standby.
At this point you can either leave your old firmware in cmc-standby in case something goes horribly wrong, or you can proceed to update it as well using the same process as above. I normally go ahead and update the second CMC in order to bring my chassis back into full redundancy, but make that decision for yourself – once you do, we are done with the CMC.
Next on the list to tackle is the IOMINF firmware. IOMINF is a funny one, as it may or may not be visible depending on the version of the CMC you are on. This piece of hardware is essentially a bridge between the CMC and the IOM device modules and is important to update as well. The firmware for the IOMINF is actually included within the CMC firmware package that we just installed – so if there is an update available, you will see the devices listed as updatable; if there isn’t, you won’t even see them listed. Kind of an odd design, as I like to see everything that’s updatable within my system whether it’s up to date or not – odd! Either way, you can go into the CLI and check out the IOMINF if you want to, but the rule of thumb is: if it is listed in the GUI it needs an update; if it isn’t, you’re golden! As shown below, I do indeed have an IOMINF update as it is showing within my list.
As for the update process you need to go through the listed components selecting one at a time and clicking ‘Apply IOM Update’, then ‘Begin Firmware Update’. This again will take a few minutes per component, and after each and every update you should see the respective IOM device disappear from your list. Go ahead and update each one that is listed, keeping in mind that you may lose connectivity to your environment during different updates depending on how you are configured, setup, etc…
Next is the iKVM switch – there hasn’t been a new release of this in a while, however I did see that the current release was updated recently, so for that reason, and for documentation purposes, I’ll go over the install anyways. You will need to extract a .bin file that is included in the .exe downloaded from Dell. The process is much the same as everything else: select your iKVM and click ‘Apply iKVM Update’, browse to your extracted .bin file and select ‘Begin Firmware Update’.
Once the thrilling process of updating IOMs, the iKVM and CMCs has been completed, it’s time to move on to the components that reside within the blades themselves.
We have a couple of options at this point – we could simply update each and every component one by one, which could be time consuming and in my opinion just a tad maddening, or we could use the Dell Repository Manager, connect our CMC to a prebuilt catalog and let it detect and install the updates. I would recommend the latter if possible! It saves you the hassle of trying to find and download all the right files. If you don’t have Dell Repository Manager set up, then have a look at this post where I explain how to get a repo set up specific to the inventory within your m1000e. One of the prerequisites of the CIFS/repository method is that the iDRAC must be at the very least version 1.50.
Since my iDRAC is below 1.50 I’ll need to update it first before going any further – to do so, access the actual iDRAC web GUI (do not try to update through the CMC; it’s a nightmare). Within the iDRAC web interface select ‘iDRAC Firmware Update’ and upload your payload (.d7) file. You can find this file in the payload directory after extracting the Windows executable downloaded from the Dell website. Once the file is uploaded, simply click ‘Next’ and wait for the update to report success.
Once it has completed you will need to give it a good 5-10 minutes before the iDRAC becomes ready again! Just relax for a spell! Oh, and repeat on all of the blades that you want to update moving forward…
Now that we have our iDRACs at a version that supports updating from a network share, and a repository created on a network share, we can proceed with the rest of the updates in the order specified at the beginning of this post – which means next up is the BIOS. To rescan for any updated files in our repository, click the ‘Check for Updates’ button after selecting ‘Update from Network Share’.
As we can see above, we now have a nice listing of the items within our blades that have updates available within the repository. At this point it’s as simple as clicking ‘Update’ (at the bottom of the listing) and letting Dell do the heavy lifting. This should take care of updating the firmware of items such as mezzanine cards, storage controllers, BIOS, backplanes, etc…
After you have updated all you can with the repository network share option, we need to move on to a few items that are outstanding, the first being the CPLD. This update, just like the iDRAC update, is done from within the iDRAC interface (under Update and Rollback). Simply upload the 64-bit update package for Windows (yes, for Windows, even if you are dealing with ESXi), select it and click ‘Install’.
After a quick reboot CPLD should now be good to go and we can continue on…
The only item left within my M620 was the SD module on which I have ESXi installed… In order to update this we need to use a process that is completely different from what we have been doing. Gotta love Dell. Anyways, the internal SD module updates come in the form of a live CD – download the ISO, mount it to your blade and boot to it.
The update itself auto-runs and should reboot once it’s complete! And that, my friends, is a day or two of complete chaos updating an m1000e with a few blades in it! I really wish that these big vendors could get all of these updates consolidated into one process, as it’s really a pain jumping back and forth from method to method. I’m not just picking on Dell here; the others are just as bad (good?).
Anyways, hopefully this helps someone with the process of moving through a slew of updates to their chassis! Also, if you know of any other updates I may have missed, or a method that may simplify things, for sure let me know.
It became very clear to me sitting out there today that every decision I’ve made in my entire life has been wrong. My life is the complete opposite of everything I want it to be. Every instinct I have, in every aspect of life, be it something to wear, something to eat – it’s all been wrong. – George Costanza
Multibooting Veeam Endpoint USB
Veeam has a pretty nifty little product in their Endpoint Backup solution. Honestly, just the other day I was wondering if it was still installed on my laptop as I hadn’t even noticed it at all – sure enough, there it was, quietly doing its thing. Anyways, Vladan Seget has a great article on his blog about creating a multi-boot USB stick with the Veeam Endpoint Backup recovery ISOs on it in order to support various hardware and laptop flavors! Definitely something to check out if you manage multiple hardware platforms and want to use VEB to protect them all!
2016 Vendor Community Awards
There are a ton of community award/recognition programs being run by vendors these days. It seems like almost everyone is trying to recognize the hard work that community leaders, bloggers, and evangelists alike are putting in to help spread the word about everything tech. That said, after seeing Andrea Mauro’s post about the programs upcoming for 2016 I realized I didn’t even know about all of them. If you are interested in applying for a program, or just want to know more about them, head over to Andrea’s blog and check it out!
Automate the answering of questions!
There is nothing more enraging than writing a great big long automation script only to find the vSphere client sitting at a prompt waiting for you to answer some kind of stupid question! I’ve never been able to find a way to work around some of these issues, but after seeing Luc Dekens’ post in regards to answering the infamous CD-ROM unmount question I might have a push in the right direction! Anyone dealing with automation and PowerCLI should really be following Luc and should certainly check out his blog!
Testing JSON Syntax!
When dealing with a lot of API calls, especially when trying to form your own body for one, it can sometimes be a little monotonous trying to find an error or test certain JSON syntax that you may have created. Jonathan Medd has a great blog dealing with all things PowerShell, and his latest Quick Tip, Testing JSON Syntax, walks us through a quick and easy way to make sure that none of our JSON is malformed – and if it is, how to quickly find where the problem lies!
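The same sort of pre-flight check can be done with plain command-line tools as well. A minimal sketch, assuming python3 is on the box – the request body and its field names below are invented purely for the demo, and `jq -e .` would serve the same purpose as `python3 -m json.tool`:

```shell
#!/bin/sh
# Pipe a candidate request body through a strict JSON parser before
# sending it to any API. A non-zero exit means the syntax is broken.
BODY='{"name": "test-vm", "cpu": 2, "memoryMB": 4096}'
if printf '%s' "$BODY" | python3 -m json.tool >/dev/null 2>&1; then
    echo "JSON OK"
else
    echo "JSON malformed"
fi
```

Swap a comma for a semicolon in `$BODY` and the same pipeline flags it immediately – handy for spotting exactly where a hand-built body went wrong.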
While we are talking about automation!
Sometimes I wish I had more time to spend within the VMware Hands on Labs environment – there is a ton of cool technology up there available to play with, absolutely free of charge! I’ve done a dozen or so labs in my time, mostly centering around newly released products or things that spark my interest. What I didn’t know is that there are labs centered around VMware’s development tools and their respective SDKs. I really need to set aside a few hours to have a look at these, as it’s something I struggle through every time I attempt to utilize them!
PernixData and their new UI
I have always been a fan of products with a clean, crisp, usable UI. I like whitespace and I like intuition, and when attending Tech Field Day presentations it’s always the first thing I see – it really sets the stage for the whole presentation! I saw Pernix at VFD5 along with their newly redesigned UI and it did not disappoint! Pete Koehler (@vmpete) has a great post on his blog covering almost everything there is to know about the new PernixData UI – why they went there, what it involves, and what are some of the goodies to really focus on! If you are a fan of Pernix, or simply a fan of creating beautiful interfaces, check out Pete’s post!
Embedded to External, External to Embedded – It’s all possible now!
For those that made the jump to vSphere 6 before Update 1 was released, you may have noticed some odd and annoying limitations during the upgrade – the first being there was no “supported” way to upgrade directly from 5.5 embedded SSO to a 6.0 external PSC. You had to first break out your 5.5 SSO to another box and then proceed with the upgrade – it was just a big PIA, to tell you the truth. Along with Update 1 came some tools that allow us to simply repoint and reconfigure our vCenter Servers to new PSCs, which essentially allows us to perform the embedded upgrade and then repoint to a newly installed external PSC – a welcome addition! If you want to learn more, there is a great post by Ryan Johnson on the vSphere blog outlining all of the scenarios and commands you need to run to repoint and reconfigure!
Disclose all the things!
There are few things in this world that amuse me to the level that Justin Warren’s disclosure posts do, and his latest in regards to VMworld 2015 does the trick just the same! Justin is a great writer and I follow his blog religiously – he has lots of excellent posts and is a very smart man with an interesting take on everything – including disclosures! Justin spares no attention to detail in these types of posts, with disclosures of food (“Some nice roast chicken and vegetables from somewhere local paired with Diet Coke. There was pie, but I had a cookie instead”) and schwag (“EVO:RAIL threw a cap at me, and I grabbed some stickers, one of which is on my laptop. The cap will go into the cap drawer because my wife says I’m not allowed to wear baseball caps.”) alike! Aside from these being incredibly amusing, they do have an impact on you – just think of all the small, little things that you receive during a conference and how they may influence you!
If you’ve ever tried to tackle all the firmware on an m1000e/VRTX and its respective blades you probably know what a hot mess it can be – using various different methods to update different pieces of hardware, some requiring a Live CD, some requiring a bootable USB key, some requiring you to extract an EXE and find secret payload files, and some being installed through the GUI. It’s a full-time job just to keep track of all the different pieces of firmware and how they are installed. To help minimize this, Dell has introduced the Dell Repository Manager – an online repository that will fetch the updates you need and serve them up to your CMC controllers for installation. The CMC can then fetch these firmware updates from DRM and apply them in an ordered, staged, and automated fashion!
Oh, it sounds so picture perfect, doesn’t it! The fact of the matter is, though, if you have ever tried to work with any of these update/firmware management products, be it Dell, HP or anyone else, you know that they are not as intuitive and easy to use as advertised – that, and they are constantly being updated and best practices are constantly changing! It’s a moving target for sure! That said, taking the time to set it up properly still far outweighs the pain of hitting the Dell support site, pulling down individual firmware packages and processing them manually – so why not spend the time now, which will hopefully save you some time later – I’m lazy by nature, and have followed the steps below to make it work for me!
Install and Configure Dell Repository Manager
First up we need to set up the Dell Repository Manager (DRM) – you should be able to find the downloadable MSI under the Systems Management portion of any of your supported Dell products on their driver download page. The install itself requires just a few clicks of the ‘Next’ button.
There is little configuration to do in order to get the DRM functional. Basically we just need to sync the Dell online database with our local install of DRM. To do so, select ‘Source->View Dell Online Catalog’. In the dialog box shown simply click ‘Yes’ to update your database.
After a few minutes of ‘Reading Catalog’ and ‘Writing data to database’ we should be good to continue with the creation of our repository.
Creating your m1000e repository
Now it is time to create a new repository which will pull down the updates for the hardware existing within our m1000e. In order to do this we will need to export the inventory of our CMC to a file to be imported into the DRM. To do this, head to the Update tab within the CMC interface (Chassis Overview->Server Overview->Update). Select ‘Update from Network Share’ under Choose Update Type and then click ‘Save Inventory Report’.
Doing this should save a file (Inventory.xml) to your local hard drive – this file contains the inventory of the blades and the hardware inside them, and needs to be copied over to your DRM server. Now we can proceed to create a new repository based on our Inventory.xml file as shown below…
Within DRM select Repository->New->Dell Modular Chassis Inventory. Give your repository a proper name and description.
Select ‘Dell Online Catalog’ as our base repository.
Point to the location where you have copied the Inventory.xml file and ensure that ‘Latest Updates for all devices’ is selected.
On the Summary screen ensure that all of the OS components are selected. This just ensures that no matter what OS we have on the blades (Linux, Windows, ESXi) we will get the proper firmware packages needed to deploy.
After a few minutes we should be redirected back to our main screen of DRM with the focus on our newly created repository. The next thing we need to do is to export this repository into some sort of deployable format that can be consumed by our servers and chassis. To do so, make sure that all of the bundles listed are checked and select ‘Create Deployment Tools’ in the top right hand corner.
Here is where we determine what type of deployment tool to create – you can see we can create a bootable ISO, an SUU, etc. Since we will be installing from a network share we need to create a catalog, so select ‘Create Custom Catalog and Save Updates’ and continue.
Provide a path where your repository, catalog, and updates will be stored, and be sure to select ‘Full Repository’, as we will need both the catalog.xml file and the updates themselves – then ‘Next’.
Once completed, the job gets submitted into the Job Queue and can take quite some time, as it is pulling down all of the updates. You can monitor this by browsing the queue at the bottom of the screen. When it’s all said and done you should see a number of folders and the catalog.xml file in your specified location. Just a note here: if you don’t see Catalog.xml, I’ve had a few instances where I needed to re-run this process selecting only the catalog file export, then re-run the complete process again selecting the full repository – told you it was a hot mess! Anyways, after you are done go ahead and set up a Windows share somewhere on this system – it doesn’t matter where it is, so long as you can browse to this folder using it.
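Before leaving the DRM box, it’s worth a quick sanity check that the export actually produced the catalog. A tiny sketch – the repository path here is purely hypothetical, so point it at wherever you chose to export:

```shell
#!/bin/sh
# Verify the exported repository has its catalog before pointing the CMC
# at the share. REPO is a hypothetical export location - adjust to taste.
REPO=/srv/dell-repo
if [ -f "$REPO/Catalog.xml" ]; then
    echo "catalog present - timestamp: $(stat -c %y "$REPO/Catalog.xml" 2>/dev/null)"
else
    echo "Catalog.xml missing - re-run Create Deployment Tools (catalog only, then full repository)"
fi
```

The timestamp matters because of the re-run dance described above – a stale Catalog.xml next to fresh update folders is exactly the symptom of a failed export.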
Setup the CMC
At this point we are through with the DRM and need to go back to our CMC in order to create our network share. This is done in the same location that we exported our inventory (Server Overview->Update), selecting ‘Update from Network Share’ as our Update Type and ‘Edit’ to setup our connection to our newly created CIFS share.
Enter the information that pertains to your share, using ‘CIFS’ as the protocol. You will need the IP address of your DRM server, the share name you have set up, any further directories underneath the share if applicable, the name of the catalog (always Catalog.xml unless you specified otherwise), as well as the proper domain and credentials to connect. To test your connection to the server, select ‘Apply’ and then ‘Test Network Connection’. Once successful, click ‘Back’ to return to our update screen.
At this point we should be able to simply click ‘Check for Updates’ and have the CMC query our DRM for any updates available and display them if so.
And voila – you can now select whether you would like to reboot now to apply updates or wait until the next reboot, and kick things off by clicking ‘Update’! Easy peasy, right? Not really – but at least this should help save some time…
A few troubleshooting tips to watch for
No Server is Ready for Update message
If this is displayed next to your Network Share information, then the first thing I would check is the iDRAC version on your blades. In order to update from a network share your iDRAC must be at version 1.5 or higher, so if you are lower, update it! As for how to do that, the easiest way I’ve found on a blade running ESXi is to enter the individual iDRAC web GUI for a given blade and browse to the Update section under iDRAC Settings. This will look for a file, and it’s always a crap shoot as to where that file is or which package to download depending on the current version of iDRAC you are on. Since this is most likely an older version of iDRAC (below 1.5), you will most likely need the .d7 file. Download the EXE for your server labeled ‘iDRAC with Lifecycle Controller’ and extract the files within it to a folder – inside this folder you should see a payload directory. The file within that (firmimg.d7) is the file you will need to upload in order to update your iDRAC. After updating there will be a brief iDRAC outage as it reloads – when it’s back up, try ‘Check for Updates’ again on the CMC and it should now work.
Cannot check for updates message
This message is displayed when there is no catalog.xml file located in your exported CIFS repository. Check to see if it is there – if it isn’t, as mentioned earlier, re-run the Create Deployment Tools process and point to the same location, selecting Catalog file only. Once that has completed, start the Create Deployment Tools process again pointing to the same location, selecting Full Repository. Check to make sure the timestamp on your catalog.xml file is updated.
Caution icon next to repository progress
This generally means that you have some updates that require confirmation to download. Simply double click the job in the job queue, click ‘Confirmation Needed’, and click ‘Accept’.
Any other possible issues or errors
There are many side effects of a root file system filling up – server halts, unexpected application crashes, slowness, midnight wake-up calls, etc. And the root file system on the VCSA is no exception – in fact, I found this out while trying to deploy a VM from a template into my environment – I kept getting the dreaded 503 error, which stated nothing useful to help with the resolution! But after a little investigative work it appeared that the root file system on my VCSA was nearly full! Now keep in mind this was in my lab, and in all honesty you should probably investigate just why your file system is taking up so much space in the first place – but due to my impatience in getting my template deployed, I decided to simply grant a little more space to the root partition so it had room to breathe! Below is the process I followed – may be right, may be wrong – but it worked!
Step 1 – Make the disk bigger through the vSphere Client!
This is a no-brainer – we can’t expand the root partition until we first expand the VCSA disk that hosts it! So go ahead and log in to vCenter (or better yet, the host on which your VCSA runs) and expand its underlying disk.
Once you have done this you may need to reboot your VCSA in order to get the newly expanded disk to show its new size – I for one couldn’t find a way to rescan the disk from within the VCSA to show the new space, but if you know of one, by all means let me know in the comments!
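One approach that has worked for me on other Linux guests – untested on the VCSA specifically, so treat it strictly as a hypothetical alternative to the reboot – is to poke the kernel’s per-device SCSI rescan node so it re-reads the disk’s capacity:

```shell
#!/bin/sh
# Hypothetical alternative to rebooting after growing the virtual disk:
# ask the kernel to re-read the device's capacity. The device name and the
# existence of the rescan node are assumptions - check your own guest.
DEV=sda
NODE="/sys/class/block/$DEV/device/rescan"
if [ -w "$NODE" ]; then
    echo 1 > "$NODE"
    echo "rescan triggered for $DEV - re-check size with: fdisk -l /dev/$DEV"
else
    echo "no writable rescan node for $DEV (needs root, and a SCSI-attached disk)"
fi
```

If the node isn’t there, or the size still doesn’t budge, fall back to the reboot as described above.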
Step 2 – Rewrite the partition table
Things are about to get dicey here! We are going to use fdisk to recreate the partition table for the root filesystem – so relax, be careful and take your time!!!
First up, let’s have a look at our disk by running “fdisk -l /dev/sda”. As shown below, we can see that it is now reporting 25GB in size.
Next, we need to find the partition that our root filesystem resides on. The picture of the “df -h” output at the beginning of this post confirms we are running on /dev/sda3 – this is the filesystem we will be working with…
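If the screenshot isn’t handy, the same check only takes two commands at the VCSA shell – a quick sketch (the /dev/sda3 name is what this lab showed; yours may differ):

```shell
#!/bin/sh
# Confirm which device backs the root filesystem before touching fdisk.
df -h /                         # usage for / - in this lab it showed /dev/sda3 nearly full
df / | awk 'NR==2 {print $1}'   # just the backing-device column, e.g. /dev/sda3
```

Knowing the exact partition up front matters – every fdisk step below targets that partition number, and deleting the wrong one would be a very bad day.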
So listed below is the slew of fdisk commands and options that we need to run – you can also see my complete output below…
First up, delete partition number 3 using the d option.
fdisk /dev/sda
d (for delete)
3 (for partition 3)
Now, let’s recreate the same partition with a new last sector – thankfully we don’t have to figure this out ourselves and should be fine utilizing the defaults that fdisk provides… this time selecting the n option, p for partition, 3 for our partition number, and accepting all of the defaults.
n (for new)
p (for partition)
3 (for partition number 3)
After accepting all the defaults we need to make this partition bootable – again done inside fdisk, using ‘a’ and then ‘3’ for our partition number.
a (to toggle bootable flag)
3 (for partition number 3)
As you can see in the message pictured above, we need to perform a reboot in order for the newly created partition table to take effect – so go ahead and reboot the VCSA.
Step 3 – Extend the filesystem
Well, the hard part is over and all we have left to do is resize the filesystem. This is a relatively easy step, executed using the resize2fs command shown below.
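Since the actual command only appears in a screenshot, here’s a sketch of the same grow operation rehearsed against a scratch ext filesystem image – on the real appliance it boils down to `resize2fs /dev/sda3`:

```shell
#!/bin/sh
# Rehearsal of the filesystem grow on a scratch image standing in for /dev/sda3.
IMG=/tmp/rootfs-demo.img
rm -f "$IMG"
truncate -s 20M "$IMG"
mke2fs -q -F "$IMG"            # small ext filesystem: think "old /dev/sda3"
truncate -s 40M "$IMG"         # simulates the fdisk repartitioning step above
e2fsck -f -p "$IMG" >/dev/null # resize2fs wants a freshly checked filesystem
resize2fs "$IMG"               # grows the filesystem into the new space
```

Note the `e2fsck -f` – resize2fs may refuse an offline grow on a filesystem that hasn’t been checked recently, which is worth knowing before you see the error on the appliance itself.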
After this has completed, a simple “df -h” should show that we now have the newly added space inside our root partition.
There may be other and better ways of doing this, but this is the way I’ve chosen to go – honestly, it worked for me and I could then deploy my template, so I’m happy! Anytime you are using fdisk be very careful not to “mess” things up – take one of those VMware snapshotty thingies before cowboying around. Thanks for reading!