When I first migrated from physical to virtual I didn’t put any research into how much disk space to allocate to my VMs; after all, it’s thin provisioned! That was a giant mistake. I filled my allocated space, essentially converting a thin image into a thick one. To make matters worse, I had over-allocated the space, so the datastore ran out of room and ESXi stopped the VM. Not surprisingly, ESXi needs some free space on the drive for its own operation (if your datastore is on the same disk as ESXi). So, how do you get out of this sticky situation?

First, you need to re-thin your disk. To accomplish this you have to get the VM booting again, which means regaining a few GB of space on the datastore. The easiest way to do this is to simply shut down another running VM (if you can), which frees its allocated swap file. Now boot back into your over-allocated VM and delete the accidental files, if any. Then run the following command from the filesystem you want to shrink to zero out the unused blocks of the disk:

cat /dev/zero > zero.fill; sync; sleep 1; sync; rm -f zero.fill

If you’re using a Windows guest then you will need the Sysinternals utility SDelete to zero the free space instead.
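
If memory serves, the switch that zeroes free space is -z, so the invocation looks roughly like this (the drive letter is just a placeholder for whichever volume you want to zero):

sdelete.exe -z C: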

Shut down the VM and log in to your ESXi shell. Issue the following command to hole-punch your disk, which deallocates the zeroed blocks and converts the disk back to thin:

vmkfstools -K /vmfs/volumes/datastore1/DebianVM/DebianVM.vmdk

Be sure to point it at the VM.vmdk descriptor and not the VM-flat.vmdk. After this completes your free space will be reclaimed and you can boot any VMs you shut down earlier. At this point we need to prevent this problem from ever occurring again. There are two approaches: permissions and/or resizing the partition and disk.
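
If you want to confirm the reclaim worked, comparing the provisioned size against the actually allocated size of the flat file from the ESXi shell should show the difference (paths match the example above; ls reports the provisioned size, du the allocated blocks):

ls -lh /vmfs/volumes/datastore1/DebianVM/DebianVM-flat.vmdk
du -h /vmfs/volumes/datastore1/DebianVM/DebianVM-flat.vmdk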

In Linux, it’s possible for a disk not to be mounted where you expect, which then fills up the OS volume instead. To prevent this from happening, chmod 000 the mount point before you mount your disks. Once the disk is mounted the mount point takes on the permissions of the mounted filesystem, so the 000 only takes effect when the mount is missing and writes to that path fail instead of silently landing on the OS volume. Of course, this can only protect you so much. The worst case scenario, a VM being forced offline, is still possible because the VM is still over-allocated.
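
As a quick sketch (the mount point and device names here are just examples):

mkdir -p /mnt/data
chmod 000 /mnt/data
mount /dev/sdb1 /mnt/data   # once mounted, /mnt/data shows the mounted filesystem's permissions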

The next thing to do is resize the partition using a tool like GParted. This is an ISO I keep on a datastore because it is so useful. The idea is to resize the partition down, copy it to a new, properly sized (much smaller) thin vmdk, reinstall the bootloader (GRUB), and enjoy!

There are a lot of guides on how to use GParted so I will just summarize the basic steps. First, open your vSphere client and add a new virtual disk to your VM on a datastore with free space (such as an NFS datastore). Make sure you size this one appropriately and select thin provisioning. Boot the VM from the GParted live image. Find the original disk (e.g. /dev/sda) and resize the partition down.

For a Linux machine, be sure to leave some space for swap. You do not need to copy the swap partition over; you can just recreate it on the new disk and issue a swapon once the VM is back online.
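
Recreating swap once the VM is back up looks roughly like this (assuming the new swap partition ends up at /dev/sda2; adjust the device and your /etc/fstab entry to match):

mkswap /dev/sda2
swapon /dev/sda2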

Once the partition is resized, copy it over to your new disk (e.g. /dev/sdb). When that completes, reboot with a Debian ISO (or any preferred disc for reinstalling the bootloader) and select rescue mode. After asking you some questions it will present a menu where you can select “Reinstall GRUB boot loader”. It will ask for the disk name, so give it the same disk you used as the copy destination in GParted.
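
If you would rather do it by hand from a live or rescue shell instead of the menu, the usual chroot dance looks something like this (device names are examples; /dev/sdb is the new disk and /dev/sdb1 its root partition):

mount /dev/sdb1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sdb
update-grub
exit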

Now you can shut down your VM, disconnect the old vmdk, move the new vmdk to the same controller location as the old one, and boot your VM. If all goes to plan, your VM should boot right back up with the smaller partition and disk.

I was able to execute the above steps, 100% remotely, using a Windows VM on the same ESXi host in under an hour. Resizing the partition is the longest part, but if your partition isn’t heavily fragmented it shouldn’t take too long. If you keep your CD images on an easy-to-access datastore the operation goes even quicker. After doing the exact steps a few times, I estimate the entire GParted process takes 20 minutes (resizing the partition and copying the data to the new vmdk), the boot loader repair about 5 minutes, and switching the settings around in ESXi a minute or two depending on your familiarity.

The alternative answer to this question posted all over the internet is to use vCenter Converter. I installed the application, but honestly it gave me too many issues opening my VM remotely and also wouldn’t open my offline VM locally, citing “Unable to obtain hardware information” as the error. I decided not to spend much time learning a new tool that I would probably never use again. I was already familiar with GParted, and since the partition has to be resized anyway (to avoid corrupting the filesystem) I decided the best approach was to follow through with GParted all the way.

For the curious, there are methods which use vmkfstools to resize the disk in place instead of copying to a new virtual disk. These involve editing the header (descriptor) information in the .vmdk file to reflect the size you want, then using vmkfstools to clone the old vmdk to a new thin one. Since it reads the header info when creating the new -flat file, it will generate it at the new, smaller size. Of course, this must be done AFTER the filesystem has been resized, otherwise data loss is a risk.
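
A rough sketch of that approach, reusing the example paths from above (the sector count is whatever your new, smaller size works out to at 512 bytes per sector; I have not used this method myself, so treat it as untested):

# edit the extent line in DebianVM.vmdk, e.g. RW <new-sector-count> VMFS "DebianVM-flat.vmdk"
vmkfstools -i /vmfs/volumes/datastore1/DebianVM/DebianVM.vmdk /vmfs/volumes/datastore1/DebianVM/DebianVM-small.vmdk -d thin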

