Docker mount backups with Unison

Docker is great, mostly. Except for backups

I love Docker; it's amazing and is truly transforming our world of tech for the better. But as nice as Docker is, there is one thing that is a pain in the ass to do, and that is backups. Sure, you could say that doing consistent backups is hard regardless of which tool or product you're using, and I'd agree. Backups are never given the TLC they deserve and are sometimes disregarded entirely. Luckily I have all too much free time on my hands, so I drafted up a backup process for all my Docker containers. Below are a few details about my environment to give you some context:

  • I have 5 docker hosts (4 physical, 1 virtual)
  • I have an unRAID server with NFS shares
  • I backup the NFS shares on my unRAID server daily to Backblaze B2 (Soon to be Wasabi!) via rclone
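As a rough sketch of that nightly cloud upload, here is one way to wire up rclone. The remote name (b2), bucket name, and share path are placeholders I've invented for illustration, not details from my actual setup:

```shell
#!/bin/sh
# Hypothetical nightly upload of the unRAID shares to a B2 bucket.
# The remote "b2:", bucket "nightly-backups", and the local path are
# placeholder assumptions, not the real config.
cloud_backup() {
    # "sync" makes the destination mirror the source (it deletes
    # files on the remote that no longer exist locally).
    rclone sync /mnt/user/shares b2:nightly-backups --transfers 8
}
```

Point this at whatever share holds your container data and drop it in cron.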

Docker and NFS shares

So initially this seemed like a no-brainer: just mount NFS shares from my unRAID server on the Docker hosts and back up those directories daily, right? Wrong. Bind-mounting an NFS share into a container, or even using an NFS Docker volume plugin, runs into inconsistent I/O across the network, leading to containers failing to write to disk among other issues (some refuse to start at all!). Still, I was headed in the right direction; I just needed to iterate on my backup strategy.

A local cache

I decided to introduce a local cache of all the Docker data on each particular host, then back that folder up once a day over NFS using unison. unison is a great tool that does bidirectional syncing, sort of like Dropbox on steroids and without the CPU-hogging Python-based client. So after reading through the documentation, it was off to the races (or off to the keyboard, to be a bit more honest). My folder structure looked like this:

/opt
    /docker
        /local
            (local cache data)
        /remote
            (NFS share)
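A non-interactive sync of that local/remote pair might look like the following. The flags are standard unison options; wrapping the command in a function is just for illustration:

```shell
#!/bin/sh
# Sync the local cache with the NFS share without prompting.
sync_docker_data() {
    # -batch: never ask interactive questions (needed for cron)
    # -prefer newer: on a conflict, keep whichever replica is newer
    unison /opt/docker/local /opt/docker/remote -batch -prefer newer
}
```

Run from cron, -batch is what keeps unison from hanging on a conflict prompt.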

Finishing touches

Now, after I set up the folder structure I needed, I wrote a quick shell script to do the following for each container:

  1. Stop the container
  2. Back up its corresponding folder in /opt/docker/local to /opt/docker/remote
  3. Start the container
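A minimal sketch of that script, assuming each container is named after its folder under /opt/docker/local (that naming convention is my assumption; the paths come from the structure above):

```shell
#!/bin/sh
# For each container: stop it, sync its data folder to the NFS share
# with unison, then start it again. Assumes the folder name under
# /opt/docker/local matches the container name.
LOCAL=${LOCAL:-/opt/docker/local}
REMOTE=${REMOTE:-/opt/docker/remote}

backup_all() {
    for dir in "$LOCAL"/*/; do
        [ -d "$dir" ] || continue
        name=$(basename "$dir")
        docker stop "$name"                     # 1. stop the container
        unison "$dir" "$REMOTE/$name" -batch    # 2. back up its folder
        docker start "$name"                    # 3. start the container
    done
}

# Only run the loop when executed directly, so the file can be sourced.
if [ "$(basename "$0")" = "docker-backup.sh" ]; then
    backup_all
fi
```

Scheduled nightly from cron, this also picks up any new container automatically, since the loop just walks whatever folders exist under the local cache.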

And that was it! My backup strategy was in place, with automatic inclusion of new containers, and everything stored centrally on my unRAID server! Uploading to the cloud was the last step, but nothing a little elbow grease and a decent helping of rclone can't handle.