

Highly available filesystem with S3QL and Orbit

S3QL is a filesystem that stores all its data in a remote storage system, such as Orbit. It can only be mounted on one server at a time, so it’s unsuitable for load-balanced clusters, but it provides an easy way to add highly available, auto-scalable storage to a single server.

This guide will take you through installing and configuring an Orbit-backed filesystem on an Ubuntu cloud server, suitable for storing web application assets (such as WordPress uploads). Files written to the filesystem are cached locally and auto-synced up to Orbit. In the event of a server failure, the filesystem can be remounted on another server.

We’re assuming you’re already signed up with Brightbox, have provided your public SSH key and have built an Ubuntu Xenial server.

Create an API client and Orbit Container

We’ll be using API client credentials to authenticate with Orbit, so firstly, create an API client using the Brightbox Manager GUI. Make sure to set its privileges to Orbit Storage Only and note the identifier and secret.

Then create an Orbit container with whatever name you like (we’ll use myfilesystem in this example) and give the API client read and write permissions to it.

Install S3QL

Then, SSH into your server and install S3QL:

$ sudo apt-get install -y s3ql

Make directories for the mount point and cache

$ sudo install -d -m 0755 /srv/share
$ sudo install -d -m 0700 /var/cache/s3ql

The /var/cache/s3ql directory is where S3QL caches data for fast local access - don’t worry, we’ll tell S3QL to limit the size of the cache later. The /srv/share directory is where we’ll mount the filesystem.

Configure authentication

Create a file named /etc/s3ql.authinfo

In the file, specify the Orbit URL, your container name and the API client identifier and secret:

storage-url: swift://orbit.brightbox.com/myfilesystem
backend-login: cli-aaaaa
backend-password: secret

And set the file permissions so only root can read it:

$ sudo chmod 0600 /etc/s3ql.authinfo
$ sudo chown root:root /etc/s3ql.authinfo

Initialise the S3QL filesystem

Now we initialise the filesystem using the mkfs.s3ql tool. The --plain flag creates the filesystem without S3QL’s built-in encryption (more on that later):

$ sudo mkfs.s3ql --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo -L myfilesystem --plain swift://orbit.brightbox.com/myfilesystem

Creating metadata tables...
Dumping metadata...
Dumping metadata...
Compressing and uploading metadata...
Wrote 111 bytes of compressed metadata.
Cycling metadata backups...
Backing up old metadata...

Test the filesystem

Before going any further, let’s test mounting the filesystem:

$ sudo mount.s3ql --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo --allow-other swift://orbit.brightbox.com/myfilesystem /srv/share/

Using 4 upload threads.
Autodetected 65492 file descriptors available for cache entries
Using cached metadata.
Setting cache size to 30979 MB
Mounting filesystem...

$ df -h | grep share
swift://orbit.brightbox.com/myfilesystem/  1.0T     0  1.0T   0% /srv/share

$ sudo umount /srv/share

Configure it to start on boot

Now we just configure S3QL to start on boot and mount the filesystem.

Ubuntu 16.04 Xenial

If you’re using Xenial, you need to write a systemd config file:

Create a file named /lib/systemd/system/s3ql.service with the contents:

[Unit]
Description=mount s3ql filesystem

[Service]
ExecStart=/usr/bin/mount.s3ql --fg --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo --compress none --cachesize 1048576 --allow-other swift://orbit.brightbox.com/myfilesystem /srv/share/
ExecStop=/usr/bin/umount.s3ql /srv/share

[Install]
WantedBy=multi-user.target

Once configured, enable the service and start it:

$ sudo systemctl enable s3ql
Created symlink from /etc/systemd/system/multi-user.target.wants/s3ql.service to /lib/systemd/system/s3ql.service.

$ sudo systemctl start s3ql
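
You can then check the unit is running and the filesystem has mounted:

$ systemctl status s3ql
$ df -h | grep share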

Ubuntu 14.04 Trusty

If you’re using Trusty, then instead of systemd you need to write an Upstart config.

Create a file named /etc/init/s3ql.conf with the contents:

start on (started networking)
stop on starting rc RUNLEVEL=[016]
expect stop
kill timeout 300
limit nofile 66000 66000
console log

pre-stop script
    umount.s3ql /srv/share
end script

exec /usr/bin/mount.s3ql --upstart --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo --compress none --cachesize 1048576 --allow-other swift://orbit.brightbox.com/myfilesystem /srv/share/

Then start the service:

$ sudo start s3ql
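
You can confirm it’s running with Upstart’s status command:

$ status s3ql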


We’re disabling compression here with the --compress none option because it just wastes CPU if the files you’ll be storing aren’t compressible - you can leave that option out if you want compression enabled, as is the default.

We’re also setting the cache size to 1GB (the --cachesize value is given in KiB, so 1048576 = 1GB). The larger the cache, the less data S3QL has to pull from Orbit on reads and the better the overall performance. It’s a good idea to make it at least as big as your working set (the set of files that are most commonly in use) so those files aren’t repeatedly downloaded from Orbit.

Also, the cache size should ideally be at least ten times the size of the biggest file you’ll be storing on the filesystem, so a 1GB cache comfortably handles files up to about 100MB. You can still store larger files, but performance can be very poor.
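
If your working set is larger, just raise the value in the mount command (or in the ExecStart/exec line of the service config). For example, a 5GB cache would look like this - the 5GB figure is purely illustrative, size it to your own data:

$ sudo mount.s3ql --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo --compress none --cachesize 5242880 --allow-other swift://orbit.brightbox.com/myfilesystem /srv/share/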

File permissions

As with any filesystem, just use chown and chmod as usual to grant access to files and directories for non-root users.
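
For example, if a web application running as the www-data user needs to write uploads to the share (www-data is just an illustrative choice here), something like this would do:

$ sudo mkdir /srv/share/uploads
$ sudo chown www-data:www-data /srv/share/uploads
$ sudo chmod 0755 /srv/share/uploads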

Rectifying a problem

If your server is rebooted without cleanly unmounting the S3QL filesystem for some reason, you’ll have to use the fsck.s3ql command to validate and fix up the S3QL filesystem:

$ sudo fsck.s3ql --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo swift://orbit.brightbox.com/myfilesystem


Snapshots

S3QL supports copy-on-write snapshots, handy for fast, efficient backups. Say your data is in a directory named /srv/share/data; you can take a snapshot of it using the s3qlcp command and store it in a snapshots directory:

$ sudo du -sh /srv/share/data/
78M	/srv/share/data/

$ sudo mkdir /srv/share/snapshots

$ sudo s3qlcp /srv/share/data /srv/share/snapshots/data-20160830

$ sudo du -sh /srv/share/snapshots/data-20160830/
78M	/srv/share/snapshots/data-20160830/

The snapshot is created almost instantaneously and takes no extra space (until files in the original data directory are modified). You can also make the snapshot directory immutable, to prevent it being removed or modified itself, using the s3qllock command:

$ sudo s3qllock /srv/share/snapshots/data-20160830/

If you later need to delete your snapshot, you can remove it using the s3qlrm command:

$ sudo s3qlrm /srv/share/snapshots/data-20160830/
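
If you want snapshots taken regularly, a small script run daily from root’s crontab would do the job - this is just a sketch using the paths from the example above:

#!/bin/sh
# Take a dated snapshot of /srv/share/data and make it immutable
SNAP=/srv/share/snapshots/data-$(date +%Y%m%d)
/usr/bin/s3qlcp /srv/share/data "$SNAP"
/usr/bin/s3qllock "$SNAP"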

Getting stats

You can use the s3qlstat command to get information about a mounted S3QL filesystem. You can see that this one, with the snapshot we just made, is listed as ~154 MiB of data despite only actually using 71 MiB, thanks to the copy-on-write de-duplication:

$ sudo s3qlstat /srv/share/
Directory entries:    1703
Inodes:               1725
Data blocks:          442
Total data size:      154 MiB
After de-duplication: 71.0 MiB (46.01% of total)
After compression:    71.0 MiB (46.01% of total, 100.00% of de-duplicated)
Database size:        320 KiB (uncompressed)
Cache usage:          77.1 MiB (dirty: 0 bytes)


Encryption

Data is already encrypted by Orbit in transit and at rest, but you can do your own encrypting within S3QL prior to uploading to Orbit if you prefer. See the S3QL documentation for more details.
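
As a rough sketch, if you wanted S3QL-level encryption you’d create a new filesystem without the --plain flag, at which point mkfs.s3ql prompts for an encryption passphrase (encryption can’t be added to an existing plain filesystem):

$ sudo mkfs.s3ql --cachedir /var/cache/s3ql --authfile /etc/s3ql.authinfo -L myfilesystem swift://orbit.brightbox.com/myfilesystem

To let the filesystem mount unattended at boot, the passphrase can then be stored in /etc/s3ql.authinfo alongside the existing entries using the fs-passphrase option (the value below is obviously just a placeholder):

fs-passphrase: your-chosen-passphrase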

On performance

Since S3QL maintains its own local database of metadata, most normal filesystem actions (listing files, creating directories and so on) are almost as fast as on a native filesystem. Reading data is limited to the speed at which it can be downloaded from Orbit, unless it’s already in the cache, in which case it’s again close to native speed. Writes are buffered to the cache first, so they’re also very fast, though they can slow down a little while S3QL is busy processing and uploading data.

So it’s great for handling uploaded assets from your web application, but it’s not recommended for storing MySQL databases or anything like that.

Last updated: 22 Mar 2023 at 11:32 UTC
