Ghost CMS & Linux - Fixing "No Space Left on Device" Issue

A few years ago, I transitioned my blog from a custom ASP.NET website to Ghost CMS. I've been really happy with Ghost - it's easy to set up and get running, and the Ghost community is really great.

Ghost CMS Logo

Life often gets in the way of blogging, and I haven't made any new posts to this blog for a while. I've had a few on and off issues with Amazon EC2 Linux instances and this blog over time, but generally things were working as expected. The blog has largely remained untouched for a year or so, which is why I was quite surprised to find myself greeted with an annoying HTTP 503 error on my site over the Christmas period.

I thought this might just be an issue with the site, so I tried the usual Stop Instance and Restart Instance. That didn't help.
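(For what it's worth, the same stop/start can be done with the AWS CLI if you have it configured - the instance ID below is just a placeholder for your own.)

$ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0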

After taking a closer look at the logs on Amazon, it turned out that I had actually run out of disk space on the instance. This was a bit weird considering I had 10 GB assigned to the volume - after all, this is only a small blog!
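If you can still SSH into the instance, df -h is a quick way to confirm that the root file system really is full:

$ df -h /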

Increasing the size of the volume

My first thought was to get the site back up and running by increasing the size of the volume (or disk space) assigned to the instance. You can do this from the EC2 Management Console by selecting the instance, choosing Storage and then clicking on the Volume ID (highlighted in yellow below).

Select the Volume to update

Next, open the Actions drop-down menu and choose Modify Volume.

Choose the new size of the volume and select OK.
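The same change can also be made with the AWS CLI if you have it configured - the volume ID below is a placeholder, and 40 is the new size in GiB:

$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 40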

Using Growpart

Once you've increased the size of the volume, it turns out there is still one more step that needs to take place. You need to tell the partition to use the "new space" you've just given it. Without doing this, the partition stays at its old size and the extra space goes unused.

You can see this by SSH'ing into the instance and typing lsblk in the terminal.

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   40G  0 disk
└─xvda1 202:1    0   10G  0 part /
loop1     7:1    0 97.9M  1 loop /snap/core/10444
loop3     7:3    0 97.9M  1 loop /snap/core/10577
loop4     7:4    0 55.4M  1 loop /snap/core18/1932
loop6     7:6    0 55.4M  1 loop /snap/core18/1944

In my case the partition xvda1 is still sized at 10 GB, while the underlying disk xvda is now 40 GB.

The simple solution is to run growpart against the disk, telling partition 1 to grow into the space it has now been allocated:

$ sudo growpart /dev/xvda 1

However, when I did this I was presented with the following error:

mkdir: cannot create directory ‘/tmp/growpart.2626’: No space left on device

Arrrgh! This meant I didn't even have enough disk space to expand the disk space. I was screwed! One suggestion I found online was to run apt-get autoremove to clear out dependencies that were installed alongside other packages and are no longer used by anything on the system. Unfortunately, that command was also met with the "No space left on device" error.
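In hindsight, a better first move would probably have been to work out what was actually eating the disk. Something like the command below - which only reads data, so it needs next to no free space - lists the largest directories on the root file system:

$ sudo du -xh / 2>/dev/null | sort -rh | head -n 20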

I searched for other files to remove, but being a bit of an amateur with Linux, I decided to delete the safest files I could think of: the log files. I did this by typing the following command in the terminal:

$ find /var/log -type f -delete

Whew! This bought me an additional 300 MB, which was just enough to run the growpart command again.

$ sudo growpart /dev/xvda 1

Success! Running the lsblk command again confirmed that partition 1 had been expanded to 40 GB:

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   40G  0 disk
└─xvda1 202:1    0   40G  0 part /
loop1     7:1    0 97.9M  1 loop /snap/core/10444
loop3     7:3    0 97.9M  1 loop /snap/core/10577
loop4     7:4    0 55.4M  1 loop /snap/core18/1932
loop6     7:6    0 55.4M  1 loop /snap/core18/1944

The partition now has the extra space, but the file system on it still needs to be grown to match, so I ran the resize2fs command. resize2fs can enlarge or shrink an ext2/3/4 file system, and growing can be done while the file system is still mounted - handy, since this is the root partition.

$ sudo resize2fs /dev/xvda1
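Note that resize2fs only applies to ext2/3/4 file systems, which is what this instance uses (the command would have failed otherwise). If your root volume were formatted with XFS instead - newer Amazon Linux images, for example - the equivalent step would be something like:

$ sudo xfs_growfs -d /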

Finally, I started the Ghost CMS again with the following command:

$ sudo /opt/bitnami/ctlscript.sh start

Once I ran that command I was greeted with:

ℹ Checking if logged in user is directory owner [skipped]
✔ Checking current folder permissions
✔ Validating config
✔ Checking memory availability
✔ Starting Ghost
You can access your publication at http://xx.xxx.xx.x:80
Your admin interface is located at http://xx.xxx.xx.x:80/ghost/

I never thought I'd be so happy to see the output from a terminal!

Summary

I still haven't been able to get to the bottom of why a small Ghost CMS site should be taking up such a large amount of disk space, but for the moment I am happy that my site is up and running. I now have CloudWatch alarms in place to alert me if disk usage gets too high.

If anyone has any guesses as to why this happens with Ghost hosted on Linux, let me know!