Tag Archives: Linux

Linux just got a whole lot Veeamier!

Some of the biggest news coming out of VeeamON 2015 was based around yet another new free product from Veeam – this time tackling the Linux market with what was then dubbed Veeam Backup for Linux.  You can read my thoughts on the product when it was announced here – or all of my coverage from VeeamON here.  That said, VeeamON is all in the past, and my predictions for Veeam Backup for Linux at the time were, well, let's just say mostly incorrect 🙂

A new name – A new beta

Fast forward to present day and we now have a new name for the product – the newly renamed Veeam Agent for Linux has also been released into a public beta!  So if you fancy yourself some tab-completed, neck-bearded, command-line, bash-junky veeamy goodness you can go ahead and sign up for the beta yourself!  The bits will be handed out in a first-come, first-served manner so I suggest you stop reading this and go ahead and sign up (and then come back, of course 🙂)

So what are we looking at?

To kick things off we have support for both Debian and RedHat based Linux distributions.  In true Veeam fashion of trying to make things as easy as possible there is no running any "make" or "install" craziness – the product simply comes shipped as a .deb or .rpm package depending on your preferred distribution.  As far as kernel support goes, so long as you are running 2.6.32 or higher you are good – which is a pretty hefty backport of kernel support for a new product in my opinion, supporting kernel releases dating back to late 2009!  Also supporting both 32 and 64 bit kernels, I'd say we are well covered for a 1.0 beta!

Let’s give it a shot!

To kick things off I’ve set up an instance of Ubuntu Server, running 14.04.  Now there are a number of prerequisites that need to be installed before we can successfully install Veeam Agent for Linux (VAL) – I’m not going to list them all here; you can see them in the VAL forum that has been created for the beta – think of the normal culprits like gcc, make, etc.  Now, you can go ahead and install these one by one – or you can simply do it the lazy way, by attempting to install the deb package, allowing it to fail, then running an apt-get -f install (shown below)


At this point you can go ahead and install the VeeamSnap package, then proceed to do the same exercise of “apt-get -f install” with the Veeam package itself to handle any prerequisites for it.
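If you want to sketch that lazy route in commands, it would look something like the following – note the package filenames here are hypothetical placeholders, so substitute the actual names of the beta packages you download:

```
# Attempt the install; dpkg will complain about missing dependencies
sudo dpkg -i veeamsnap_1.0_all.deb      # hypothetical filename
# Let apt pull in whatever dpkg said was missing
sudo apt-get -f install

# Repeat the same dance for the agent package itself
sudo dpkg -i veeam_1.0_amd64.deb        # hypothetical filename
sudo apt-get -f install
```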

As far as where I’m going to place my backups for the purposes of this post – it’s simply sitting on a secondary drive attached to my test VM.  However you could, and most likely should, attach some sort of NFS mount to get your backups off host.  In the future – and I’m hoping not too long from now – we will see some sort of Veeam Backup and Replication repository integration, similar to what we have seen with Veeam Endpoint for Windows.

Veeam has done an amazing job with the interface provided in Veeam Agent for Linux.  At times you might forget you are even on a command line, as the UI is pretty advanced for bash 🙂  To kick things off and create our first job let’s go ahead and run “sudo veeam” as shown below and hit ‘C’ for configure…

First things first – give the job a name.


As far as the backup mode goes I selected Volume Level – but if you are following along you’ll see you also have the option to do a complete machine-level or file-level backup.  Below I’ve selected my OS drive as my source.


Again, my destination is local and I’ve set it to maintain 14 restore points.


And, if you choose to do so, you can set a schedule…


If you are a Veeam Backup and Replication user then you might be used to certain statistics when looking at running jobs – as you can see below, the bash UI that Veeam has created gives us the same look and feel as B&R, letting us see the bottleneck and data statistics just as we would there – this is a huge feature in my mind!


So with that we have successfully installed and configured the new Veeam Agent for Linux beta – and we have created and successfully run a backup job.  So far so good!  That said, backups are only half the battle – restoration is where it’s at!  So how do we restore files with the Veeam Agent for Linux?  First up, select your backup job and hit ‘R’.


Then, select the backup you wish to recover from (restore point) and hit enter – you should get a message stating that your backup has been mounted to /mnt/backup/ (as shown below)


From there you can simply exit the Veeam UI, navigate to /mnt/backup/ and restore by copying whatever files/directories you wish to wherever you wish…


Once you have completed restoring your files, simply go back into the Veeam Agent for Linux UI, select your Job Name and hit ‘U’ to unmount the backup from the machine.


So there you have it!  The first beta for the newly renamed Veeam Agent for Linux has arrived!  If you would like to help shape the product by providing feedback I’d recommend you go out and pull this beta down now and start playing around with it!    Thanks for reading!

A free backup for a free OS – Veeam Backup for Linux announced

Yesterday during the opening general session at VeeamON, CEO Ratmir Timashev announced that Veeam is set to release yet another free tool into their backup arsenal: Veeam Backup for Linux.  Roughly one year ago at Veeam’s inaugural conference he took the stage across the street and surprised most with their Endpoint Backup solution, allowing organizations to back up their physical desktops, laptops, and servers, so long as they were Windows based.  Veeam Backup for Linux fills a hole that was left by Endpoint Backup: the millions of physical and cloud based Linux installations that are out there “running the world”.  And just like Veeam Endpoint Backup, Veeam Backup for Linux comes in at the low low price of FREE!

What we know so far

Veeam Backup for Linux works by placing an agent into your OS.  Distribution-wise we are limited to just Debian and RedHat based builds right now, with more to come as Veeam gets set to go GA with the product.  From there, Veeam has built a proprietary Changed Block Tracking (CBT) driver into their agent, allowing them to perform their ever-popular incremental backups just as they would with a VM utilizing VMware’s CBT capabilities.  As far as snapshots go, Veeam has gone custom on that play as well – although they fully support LVM, they’ve opted to use their own snapshot mechanism, which can perform at the block level and store the snapshot data on your actual backup device, rather than utilizing LVM snapshots, which would essentially place your snapshot data on the same volume you are backing up.  All of this is provided in a consistent state, just as VSS quiesces your Windows servers.


As you can see above, the UI is actually quite slick.  We know most Linux installs, be they cloud based or not, are deployed in a headless, bare-bones manner – with the vast majority of them not containing any desktop manager such as GNOME.  Having the product report progress in an organized manner through just the command line is a nice feature of Veeam Backup for Linux.

Veeam Backup for Linux also fully integrates with Veeam Backup and Replication – allowing us to backup to our existing backup repositories – meaning we can take advantage of all of the goodness that VBR provides to us.  Things like encrypting our backup files, copying them off-site, setting up notifications, etc. – all of this can be done within the VBR console on our Veeam Backup for Linux backups.

Where do we go from here?

So where does this leave us?  What are Veeam’s plans for VBL?  This I don’t know, but I’m always willing to speculate 🙂  Personally I’d love to see Veeam Backup for Linux and Veeam Endpoint Backup have tighter integration with Veeam Backup and Replication – central management for jobs, remote deployments, etc.  I’d love to see all of this functionality move into a paid version of VBR – I don’t think I’m alone on this either.

Secondly, and believe me this is just purely my speculation but I think Veeam Backup for Linux may open the doors for something more.  As they have put resources into building the ‘data mover’ technology which can run on Linux I’d love to see them port that into Veeam Backup and Replication in the form of proxies.  Currently we have to deploy Windows based proxies in order to scale our Veeam environment – in most cases this isn’t a big deal – but since version 3 I’ve been looking for Veeam to release a small Linux based proxy that we can utilize, something with a smaller footprint and without the costs of Microsoft licensing.  Maybe Veeam Backup for Linux contains some base code that Veeam can utilize for this – again, this is purely just my speculation and honestly I’ve only had one coffee this morning so I may be a little crazy.

With all that said, Veeam Backup for Linux is not yet available – it’s slated for release sometime in the first half of 2016!  If you’d like, you can sign up for a first-come, first-served beta version here!

Tuning Linux (Debian) in a vSphere VM – Part 3 – /usr/bin/random

So here we are – Part 3, the final part of the Linux series.  The title of this part is /usr/bin/random because, well, the content will indeed be quite random.  I couldn't think of a way to classify the content of this post into a single category!  So get ready for a hodge-podge of fixes, modifications and configurations that have nothing in common and no similarities whatsoever.  I apologize up front for the flow (or lack thereof) of this post, but hopefully someone will find something useful in it.

What time is it?

Almost every OS is very dependent on having accurate time and uses different hardware and software techniques to keep it.  When an OS is virtualized this adds yet another layer between the OS and the hardware, and creates some challenges as it pertains to timekeeping.  I’m not going to go through a timekeeping lesson – there’s a great VMware whitepaper here that goes very deep into the topic.  Instead I'll just try to summarize what I've learned over the years as it relates to Linux and timekeeping.  Depending on the Linux distribution and kernel version you are running, you may need to add some boot parameters to either the grub or LILO menu in order to disable the High Precision Event Timer (HPET) and set the proper clock source.  Most new releases of Linux distributions (within the last couple of years) don’t require any changes, but, for instance, if you are running Debian 4.x you would need to append “divider=10 clocksource=acpi_pm” to your kernel boot line.  For a full list of available options have a look at KB1006427 from VMware.
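As a concrete (hypothetical) example, a Debian 4.x kernel entry in /boot/grub/menu.lst would end up looking something like this – the kernel version and root device are placeholders, so yours will differ:

```
kernel /boot/vmlinuz-2.6.18-6-686 root=/dev/sda1 ro quiet divider=10 clocksource=acpi_pm
```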

The I/O Schedule

ESXi has its own special sauce as it pertains to scheduling disk I/O – and so does Linux.  Linux has several different I/O schedulers built into the OS: NOOP (noop), Completely Fair Queuing (cfq), Anticipatory (anticipatory), and Deadline (deadline).  By default, as of kernel 2.6, most Linux distributions use CFQ as their I/O scheduler of choice – not really a problem in the physical world, but in a guest OS it can cause some performance degradation.  As stated earlier, ESXi has its own I/O scheduling, so does it really make sense to schedule I/O at the guest OS level and then again at the hypervisor level?  Probably not!  That's why there is a VMware KB article that recommends switching your I/O scheduler to noop or deadline.  Honestly I would switch to noop as it does nothing to optimize disk I/O, which allows the hypervisor to do its thing.  Here's how!

You can change the scheduler during runtime by echoing to the proper disk's sysfs entry – for sda, for example, we would use

echo noop > /sys/block/sda/queue/scheduler

However, the above will reset back to cfq on reboot.  To permanently switch you need to add an elevator switch to your kernel entry in grub (menu.lst) or your Linux entry in grub2 (grub.cfg).  Your kernel entry should then look similar to the following – the example below is Grub2.

linux   /boot/vmlinuz-2.6.32-5-686 root=UUID=b62a38cf-8917-484a-9a96-d5a74beb8d59 ro  quiet elevator=noop
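To confirm which scheduler is actually active (either after the echo or after a reboot with the elevator switch in place), you can read the same sysfs file back – the scheduler in square brackets is the one in use:

```
cat /sys/block/sda/queue/scheduler
# the active scheduler is shown in brackets, e.g. [noop] anticipatory deadline cfq
```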

Copy the floppy!

In Part 2 we went over how to completely get rid of the floppy drive.  Part of those instructions included blacklisting the floppy module inside of /etc/modprobe.d/ – well, guess what?  There is a slew of other modules loaded by default that you will probably never use when running a virtualized instance of Linux.  Below is a list of modules that I often blacklist.  Sure, there are the one-off cases where you will need one or two of these modules loaded, so just pick and choose for yourself…

floppy – Take a guess, yup you got it, the floppy drive 🙂

mptctl – This monitors your RAID configuration.  I don't normally use RAID inside of my Linux guests so this is really not needed – also, this will spam up your messages log quite a bit.

pcspkr, snd_pcm, snd_page_alloc, snd_timer, snd, soundcore – These all have to do with sound, which I'm not even sure is possible in a VM anyway.  Disable them!

coretemp – if you don't care how hot your vCPU's are running you are safe to disable this.  If you do care, then, well, I'm not sure what to tell you 🙂

parport, parport_pc – These have to do with your parallel ports.  I've never used these and always blacklist them.
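If you want to blacklist the whole list above in one shot, a quick loop does the trick – I'm dropping them all into a single file here, /etc/modprobe.d/blacklist-vmguest.conf, which is just a name I made up for this sketch:

```
for mod in mptctl pcspkr snd_pcm snd_page_alloc snd_timer snd soundcore coretemp parport parport_pc; do
  echo "blacklist $mod"
done | sudo tee /etc/modprobe.d/blacklist-vmguest.conf

sudo update-initramfs -u   # rebuild the initramfs so the blacklist applies at boot
```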

Virtual Consoles – Do you even use them?

If you are wondering how you can even use them inside of vSphere – check out my post here.  If you don't use them, why leave them enabled?  Disabling them is pretty easy, just comment out the tty# lines in /etc/inittab – I always tend to leave the lines in the file and just place a '#' in front of the ones I don't need – below you can see an example of my inittab file.  As you can see I left one console activated.

1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
#3:23:respawn:/sbin/getty 38400 tty3
#4:23:respawn:/sbin/getty 38400 tty4
#5:23:respawn:/sbin/getty 38400 tty5
#6:23:respawn:/sbin/getty 38400 tty6
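Once you've commented out the consoles you don't need, you can tell init to re-read /etc/inittab without a reboot:

```
telinit q    # re-examines /etc/inittab so the change takes effect immediately
```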

So that's really all I can think of that in terms of tips and tricks that I do when deploying a Linux guest on vSphere.  I'm sure there are many others.  Linux and vSphere both have a ton of advanced settings and are very flexible when it comes to tuning for performance so if you know of any that I've missed I'd love to hear about them, just leave them in the comments box below.  By no means am I a Linux expert and I'm always eager to learn 🙂  Thanks for reading!

Part 1 – Installation

Part 2 – Virtual Hardware

Part 3 – /usr/bin/random

Tuning Linux (Debian) in a vSphere VM – Part 2 – Virtual Hardware

Alright – so in Part 1 we covered LVM, partitioning and the basic installation of our VM.  Now it's time to tackle a few key points as it pertains to virtual hardware.  There isn't a whole lot to cover here and honestly I'm just going to graze the surface on some of these topics.  You will find a bunch of links to KB articles and white papers where you can go pretty deep on some of these topics, so feel free to read those as well.  Again, leave all of your feedback and comments in the sections outlined below – it's a learning process for us all 🙂  So, let's get into it!

Use the virtualized hardware!

VMXNET3, right?  Hopefully by now you know the advantages of using the VMXNET3 adapter – most of those same advantages apply to Linux as well, so I’d definitely recommend using it.  One of the benefits of VMXNET3 is RSS or multiqueueing, however there are a few things to keep in mind here: any modern Linux kernel will have its own built-in vmxnet3 driver/module, and for the most part (I think by default) your VM will use it even if you have VMware Tools installed – and this is fine.  There are times, however, where you might want to use the driver included with VMware Tools: if you are running an older Linux kernel, the version of vmxnet3 you have might not support RSS or multiqueueing, whereas the VMware version does.  To do so, run the following command to replace the kernel driver with the one shipped in the VMware Tools install package.

./vmware-install.pl --clobber-kernel-modules=vmxnet3

Go ahead and check out KB 2020567 for a bit more information on this.  Just remember: newer kernel = do nothing, older kernel = run the command above.
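If you're not sure which vmxnet3 driver your VM is actually using, ethtool will tell you – I'm assuming the interface is named eth0 here, so adjust for your system:

```
ethtool -i eth0    # look for 'driver: vmxnet3' and compare the version line
```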

Who wouldn't want to be paravirtual?

Hey, sounds cool right?  I know I'd want to be paravirtual!  Well, your virtual SCSI controller does too!  PVSCSI has been the recommended adapter of choice for a little while now and can provide a multitude of benefits, such as higher throughput at a lower CPU cost!  That being said, you need to be sure you are running a supported OS!  When it boils down to it, so long as you are running at least Linux kernel 2.6.33 and have the vmw_pvscsi driver you should be OK – but for those that like to be officially supported, the list is here.

Manage that memory!

It's always best practice to right-size your VMs: keeping an eye on what CPU and memory resources are being utilized, knowing your workloads and sizing appropriately – a Linux VM is no different (hey, we are VMs too you know).  There is one setting within a Linux VM that can be tweaked to help though – swappiness!  Yup, I said it, swappiness!  As always, open source and Linux never fail to amuse me with their terms and names for things!  Swappiness is essentially an integer value that determines when to move memory pages from physical memory (which is really virtual now, just mapped to physical) to your local swap.  Sounds cool, eh, but there is a catch!  Linux will move pages to swap even if you have physical memory available, and this is completely separate from the fact that vSphere will also do its own memory swapping if need be.  To cut to the chase, a high swappiness value indicates that processes are more likely to be swapped, whereas a low value indicates the kernel will leave them be.  The default value I've seen the most is 60 – normally I like to change this to somewhere around 15-20.  This is done by executing the following command

echo 15 > /proc/sys/vm/swappiness

This works great for the current running state, but to persist across reboots you will need to add "vm.swappiness=15" somewhere inside of /etc/sysctl.conf
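To double-check the value before and after the change, sysctl works nicely – and once the line is in /etc/sysctl.conf you can re-apply it without a reboot:

```
sysctl vm.swappiness              # show the current value
sysctl -w vm.swappiness=15        # equivalent to the echo above
sysctl -p                         # re-apply everything in /etc/sysctl.conf
```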

Useless Hardware – Get rid of it!

Do you honestly ever plan on using that floppy disk?  Didn't think so – get rid of it!  This is a process I try to apply to all my VMs, not just the Linux ones.  Unneeded hardware can tie up not only vSphere resources but your OS resources as well.  In terms of the floppy it's not as simple as removing it from the VM's settings – you will need to also disable it in the BIOS of the VM.  If not, you will always see that crazy floppy error that flashes up when grub is booting.  And even further, remove and blacklist the floppy module to prevent it from even loading into the Linux OS.  If you perform an 'lsmod | grep floppy' you will see that even though you have removed it from the VM and disabled it in the BIOS, the module still loads.  To completely blacklist a module from loading simply add it to a blacklist file in /etc/modprobe.d/ – in the case of the floppy we can execute the following list of commands to remove all traces of that pesky disk!

echo "blacklist floppy" | tee /etc/modprobe.d/blacklist-floppy.conf
rmmod floppy
update-initramfs -u

So, that's it for now!  As with anything you do in production be sure to test it first!  Try different swappiness values, keep an eye on your resource usage and adjust as needed!  If you have any other tips/tricks for a Linux VM please leave them in the comments section below!  I'd love to hear them!  Next up in this series we will look at a bunch of random things like timekeeping, the i/o scheduler, as well as some other modules that we probably don't require when running virtualized!  Stay tuned and thanks for reading!

Part 1 – Installation

Part 2 – Virtual Hardware

Part 3 – /usr/bin/random

Tuning Linux (Debian) in a vSphere VM – Part 1 – Installation

LAMP services, web servers, firewalls, proxies, load balancing: these are some of the many use cases that the Linux OS along with its packages delivers.  In fact, you would probably be pretty hard-pressed to walk into any major enterprise and not find at the very least one flavor of Linux distro running live in production, whether it be a physical appliance or a virtual machine.  The latter is becoming more and more prevalent.  The evolution to a software-defined data center is heavily reliant on having those edge services virtualized, those internal web servers and proxies virtualized, and that is why I decided to share a little of what I do when virtualizing Linux, mainly Debian, and the processes and practices I take to ensure I get the most out of the OS.  So welcome to Part 1 (Installation) of my 3-part series covering all that is virtualizing Linux!  Be sure to share any tips you might have as well in the comments section – as you will see, I'm no guru when it comes to Linux – I know enough to get by – but let's make this a community effort.

LVM for your VM?

So first off let me reiterate that I'm no expert when it comes to LVM and/or Linux disk partitioning – in fact this is one of the reasons I decided to include the installation on this series – to learn more.  I had never even used LVM on a Linux VM before writing this post but I’m sure I will now.  You can see below you are presented with a few different options during the disk partitioning process.


Now before going too far into LVM let me first thank Tomi Hakala ( blog / twitter ) and Romain Decker ( blog / twitter ) for giving me a bit of an introduction to what LVM is and how it works.  I certainly recommend checking out Tomi’s post on LVM here as I’m probably just going to regurgitate a little of what he wrote.

Whether you decide to use LVM or not really depends on your use case, but from all of the reading and research I’ve done I came to a conclusion on when I would use it, and it solely depends on whether or not you plan on using separate partitions inside your guest.  Honestly, if you plan on using only one disk in the virtual machine, and you are not worried about separate partitions filling up and taking down everything, or you don’t have any plans on mounting different partitions with different mount options or file systems, then LVM might not be for you.  But if you are worried about things like a single partition taking down the system, or expanding partitions on the fly, or want to simply mount /tmp as noexec (explained later), then you might want to consider using LVM.  My normal setup usually has my OS and individual partitions like swap, /var, and /home on one disk, and then I add another to simply contain data for whatever application it is I’m running.  Therefore, since I’m going to set up some individual partitions and I will want to easily expand them on the fly, I’m going to use LVM by selecting ‘Guided – use entire disk and set up LVM’.  That being said, when/if I do add the second disk for data I would not use LVM on it, since I could easily enough expand it with the native Linux (resize2fs) or gparted tools.

Why can't partitioning just be easy?


Now we are into the partition questions.  As I said before, I plan on having separate partitions for /home, /usr, /var, and /tmp – why?  Well, this way I can set a few different mount options on a per-partition basis – meaning, if this was say a web server, would I really need to be executing binaries on /tmp?  Probably not, so by breaking it out into partitions I can mount /tmp with the noexec option to prevent that.  If it was one big partition this option wouldn’t be available to me.  Also, if I planned on selecting ‘All files in one partition’ then I probably would have chosen not to use LVM, as I could just as easily expand the disk with VMware and resize2fs.  Another advantage of using separate partitions: say an application goes squirrelly and fills up your system with logs in /var – with /var mounted separately it will only be that partition that is affected while the others remain functional.  In the case of having only one partition, the whole server would be affected.  So again, for these reasons I’m going to select ‘Separate /home, /usr, /var, and /tmp’.

As mentioned above you may want to mount different partitions with different options as they are laid out in /etc/fstab.  Now there are a lot of options (all here) when mounting partitions – too many to go over in this post, but here are a few to get you started

noatime – The Linux file system keeps track of a ton of different things as it pertains to time and files (creation, modification, access).  Certainly when it has to update the access time every single time a file is accessed you can understand that it will generate extra I/O.  If you don’t need this functionality, mount the partition with noatime and that I/O will not occur.

noexec/exec – Disables (or enables) the execution of binaries on the specified file system.  noexec is usually set on a /tmp mount in order to harden security.

sync/async – Determines whether I/O is written to disk in a synchronous or asynchronous fashion.

Remember: Any changes to the options in /etc/fstab will not occur until you have either rebooted or remounted the file system in question.
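Tying those options together, a hypothetical /etc/fstab might carry entries like these for the separate partitions – the device names are placeholders and will obviously differ on your system:

```
/dev/mapper/vg0-var   /var   ext3   defaults,noatime           0  2
/dev/mapper/vg0-tmp   /tmp   ext3   defaults,noatime,noexec    0  2
```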

Swap Shop!


So now we see an overview of how Debian has determined the best way to lay out our partitions.  For the most part I normally just accept these defaults with one exception: you may want to increase the size of the swap partition.  There are a lot of recommendations that somewhat conflict with each other around this subject.  It used to be that swap should be at least twice the size of the amount of physical memory (IE 256MB RAM would result in a 512MB swap partition).  I tend to let this slide a bit, but when I can I try to at the very least set the size of the swap partition to the same as the VMkernel swap file, which is the total amount of memory assigned to the VM minus the VM’s reservation (IE – a 4GB VM with a reservation of 3GB would result in a VMkernel swap file totaling 1GB).  Best advice: just pay attention to the amount of in-guest swapping that is occurring, as well as the amount, if any, of ballooning (if you are heavily overcommitted), and you can always resize down the road.
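The VMkernel swap sizing above is simple arithmetic – here's a quick shell sketch using the same hypothetical numbers (a 4GB VM with a 3GB reservation):

```shell
vm_memory_mb=4096     # memory assigned to the VM
reservation_mb=3072   # memory reservation set on the VM
vmkernel_swap_mb=$((vm_memory_mb - reservation_mb))
echo "VMkernel swap file: ${vmkernel_swap_mb} MB"
```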

The rest of the installation is pretty much your standard Linux install so I’ll leave that up to you…

Watch for Part 2 when we talk about virtual hardware and VMware Tools!  Also again, please leave any suggestions or comments below – they are always appreciated!  

Part 1 – Installation

Part 2 – Virtual Hardware

Part 3 – /usr/bin/random

setfacl – Quit changing Linux permissions and allow access to more than just root!

As much as I love working inside of the vSphere client and focusing solely on VMware and virtualization, I’m also tasked with a lot of web programming, database development and general server administration.  I look after a few Debian servers which provide an external presence, and the web developers working on them often require access to upload and change files and folders within the webroot.  Now, being security minded, I don’t want to just hand out our root password all the time, so having them connect as root is out of the question – it’s disabled anyway.

Historically the process has involved changing the various folders’ owner to the webadmin account, thus allowing the developers to connect and do what they need to do as webadmin.  This has always worked great but poses some challenges, especially when using certain CMS applications such as WordPress and Joomla.  When installing new plugins and modules these applications tend to create their new folder structures with the owner set to www-data – kind of a pain in the @$$ as now the webadmin account has just lost write access to the directory.  Again, this usually resulted in myself or someone else being summoned to change the owner again!  So the solution: a little bit of ACL awesomeness…  getfacl and setfacl, in a nutshell, allow you to grant multiple types of access to more than just the owner and group on Linux files and folders – perfect for my scenario, as now I can leave root as the owner for security purposes, www-data as the group in order to actually let the internet display the sites, and add an ACL for webadmin in order to allow them read/write access to do their job.  Below is a pretty short example of how to get started.

First off you need to have ACL support in your kernel – which, honestly, you probably do, but in case you feel like checking just grep your boot config as follows.

cat /boot/config-kernelversion | grep _ACL

This should generate something along the lines of CONFIG_EXT3_FS_POSIX_ACL=y

As with any Debian package installation it’s pretty easy..

apt-get install acl

Almost there – we now have acl installed and know it’s supported; we just need to be sure the file system on which we want to provide ACLs is mounted with acl support.  To do this you can simply add ‘,acl’ to the mount options in your /etc/fstab file as shown below…

/dev/sdb1 /var/www/webroot  ext3 defaults,acl,errors=remount-ro 0 1

You may need to either reboot here or issue a remount command on your targeted drive in order to get things working.  After doing so adding an ACL is pretty simple.  You can check out the man pages for more in-depth documentation but to get myself up and running the following was sufficient…

setfacl -Rm u:webadmin:rwx /var/www/webroot
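You can then verify the ACL took effect with getfacl – the webadmin entry should show up alongside the regular owner/group bits:

```
getfacl /var/www/webroot | grep webadmin   # should show user:webadmin:rwx
```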

Voila!  Done!  Again, this is a VERY introductory post dealing with Linux ACLs and permissions… there are a lot more posts out there which go deeper into the details, but if you are looking to get up and running quickly this should do the trick!  Comments, questions, concerns – throw them in the comments box below…

Expanding a Linux disk with gparted (and getting swap out of the way)

Over the past year or so there have been a few times where I've needed to expand a disk attached to a Debian system.  Now this isn't a hard task by any means, and may not even warrant a blog post, but the matter of it is that I always seem to forget the steps I need to take to get the unallocated space I've added next to my actual EXT3 partition, since the swap partition is always in the way!  So, I thought I would just throw up how I've done it in the past in hopes of maybe helping a few others that visit, but more so for myself and my memory (or lack thereof).  Now keep in mind I'm sure there are ways to perform this exact same thing without taking the VM offline, and I'm sure there are other 'better' ways to achieve the same results, but this way has worked for me consistently so it's what I chose to do.  Any other suggestions are certainly welcome in the comment box below.

First off, you will need to expand your drive from within the vSphere Client – not rocket science here, pretty simple to do.  Next, get yourself a copy of gparted, mount the ISO to your VM and reboot, booting into the gparted interface (accept all the defaults for keymap, X, and resolution, unless of course you like playing…).  The first thing you will notice inside gparted is that the swap partition is right smack in between your EXT3 partition and the unallocated space.  Normally you could just resize the EXT3 partition and consume the unallocated space, but with swap there you can't.

So, the goal is to migrate the unallocated space to precede the swap partition.  This is done so using the following procedure..

First, resize your extended partition (not the one labeled linux-swap) to include the free space.  In my case this is done by selecting /dev/sda2 and then selecting the Resize/Move button.  In the popup, simply drag the arrow on the right side of the bar to include all of the free space available and again select Resize/Move.  

Just to note, we are not actually performing any moves or resizes at this point; we are simply creating a chain of commands that gparted will follow once applied.  You can either apply at the end of each step, or wait till the bitter end and do it all at once; it's up to you.  Either way, when you are done you should see your unallocated space move into /dev/sda2 as shown below.

So, as you can see from the screen capture, the next thing we need to do is move that swap partition to the end of /dev/sda2.  This will allow us to proceed with the next few steps we need to perform in order to accomplish our end goal of expanding /dev/sda1.  This time select the linux-swap partition (/dev/sda5) and click 'Resize/Move'.  Inside the popup, this time click the actual white space inside the partition and drag the complete box over to the right (don't use the arrows).  This will move the swap partition to the end of /dev/sda2, which we will resize next.  Once you're looking like the images below, again click 'Resize/Move'.

Alright, now we are getting somewhere; you should be looking pretty similar to the shot below by now.  And yes, you guessed it: now we need to move that unallocated space outside of /dev/sda2 in order to make it available for the expansion of /dev/sda1.

So, once again select /dev/sda2 and click 'Resize/Move'.  This time we will use the arrows.  What we want is to drag the left arrow all the way over to the edge of the yellow box (swap).  This will shrink /dev/sda2 down to the same size as the swap it contains (/dev/sda5) and, in turn, create the unallocated space in between the partitions.  Once done, click 'Resize/Move'.

Alright, almost there; you should look somewhat similar to the screenshot below.  Our unallocated space is now directly next to our EXT3 partition and no longer a member of /dev/sda2.

At this point I usually apply the first three operations before the expansion.  I've noticed the process will sometimes error out if you try the following steps without applying first, so go ahead and apply those operations (hit 'Apply').

We are now able to simply extend /dev/sda1 into that unallocated space.  Similar to when we resized /dev/sda2, this time select /dev/sda1 and click 'Resize/Move'.  In the popup, as we did earlier, select the arrow at the right side of the partition and drag it to the right to merge sda1 with the unallocated space; when done, click 'Resize/Move'.

So, there you have it!  We have moved that unallocated space into our EXT3 partition.  Go ahead and hit apply again to commit that final change and at the end of the process you should have a larger /dev/sda1.
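Once the VM boots back off its own disk, a couple of read-only commands will confirm the resize took; a quick sketch (nothing here is specific to my layout):

```shell
# The root filesystem should now report the larger size...
df -h /
# ...and swap should be back in service on the relocated partition
cat /proc/swaps
```

If swap doesn't come back on its own, it's worth comparing the entry in /etc/fstab against blkid's output, since moving the partition can leave a stale reference behind.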

Now, like I said at the beginning, I'm sure there are ways to do this while the VM is online, or I could be doing things completely wrong, but this way has consistently worked for me for both Linux and Windows guests.  That being said, I'm open to other suggestions, so leave your comments, concerns, thoughts, etc. below … 🙂

Switching between Linux virtual terminals within the vSphere Console

One of the functions I use most inside a Linux install is the ability to use the keyboard shortcuts CTRL+ALT+F(1-7) to switch between virtual terminals.  For the most part all of my Linux installs are headless, meaning no graphical installation.  I'm more comfortable inside Linux via the command line and find gnome, kde, and other window managers slow me down.  So, for those installs that do have a window manager, I normally either SSH in or switch to a different virtual terminal if I'm on the console.

Enter my problem.  When using the VMware vSphere console on a Linux install, simply throwing out a CTRL+ALT+F(1-7) does not work.  As most of you may know, the CTRL+ALT combo is reserved within the vSphere console to release its hold on the mouse and keyboard.  So, in order to switch between those virtual terminals there is a little bit of a workaround as it pertains to hotkeys.  What I have found is that the combination CTRL+ALT+SPACE+F2 will take you into a new virtual terminal.  To return to your X session, simply hit ALT+RIGHTARROW.
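As an aside, when you already have a shell on the box the same switch can be made without any hotkeys at all using chvt, which ships in the kbd package on most distributions.  A minimal sketch, with a guard so it degrades gracefully when you're not at a console:

```shell
# chvt switches the active virtual terminal; it needs root and a real
# console, so guard the call rather than failing over SSH
if command -v chvt >/dev/null 2>&1 && [ -w /dev/tty0 ]; then
  chvt 2    # jump to tty2, the same effect as CTRL+ALT+SPACE+F2
else
  echo "chvt needs root at the console (not an SSH session)"
fi
```

chvt 7 (or ALT+RIGHTARROW as above) brings you back to the X session.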

There may be other ways to do this, but I have found this to be the most consistent, so as always, leave me a comment below if I'm crazy or you have any other suggestions, concerns, thoughts, etc…  Changing the keycode that releases the mouse and keyboard from the VM's console is not really an option for me; I'm way too used to it.  I'd love to hear from people, as this is just one of those things that drives me a little nuts!

Installing VMware Tools on Debian 6 (squeeze)

I was in the process of spinning up a few Debian 6 squeeze servers today when I ran into a little bit of trouble installing VMware Tools inside the guest.  These were bare-bones installs, containing basically just the system utilities from the tasksel menu (no desktop GUI environment).  I proceeded to install VMware Tools the way I normally do, and have for previous versions of Debian (Right-Click VM -> Guest -> Install/Update VMware Tools), when I ran into the following error during the install.

"The path "/usr/bin/gcc" is not a valid path to the gcc binary."

It was at this point I went and pulled down the build-essential package, thinking maybe gcc wasn't installed, and knowing that I would need 'make' as well.

After an apt-get install build-essential I received the same error when trying to re-install.  The path /usr/bin/gcc was certainly valid, and it most certainly pointed to the gcc binary.  Basically, what I found out is that when you do the base install of Debian 6 Squeeze (through a net-install image) you do not receive the header files for the kernel you are running.  These files are needed for VMware Tools to recompile and configure itself for the kernel version you are running.  So, an easy fix…

apt-get install linux-headers-$(uname -r)

After running this command and then reinstalling VMware Tools, you should see the following during your installation process.

"Detected GCC binary at "/usr/bin/gcc-4.3".

Just accept the default here to not change the path, complete the rest of the VMware Tools install, and Bob's your uncle.
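The check VMware Tools is effectively performing can be reproduced by hand: the linux-headers package drops a 'build' symlink for the running kernel under /lib/modules, and that's what the installer compiles against.  A quick, read-only test:

```shell
# Verify the kernel build tree the VMware Tools installer needs exists;
# if it doesn't, the fix is the apt-get line from the post above
ls -d "/lib/modules/$(uname -r)/build" 2>/dev/null \
  && echo "headers present for $(uname -r)" \
  || echo "headers missing: apt-get install linux-headers-$(uname -r)"
```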