Split DNS with Caddy and Docker

Split DNS

Split DNS is a fairly new concept to me; having DNS records that resolve to one address internally and another externally is just weird in my opinion. I'd much prefer some sort of intelligent web server that, based on the client's IP address, reverse proxies the connection to either an internal or external endpoint, but apparently I'm in the minority in this regard, because split DNS is a widely accepted concept and is extremely common in enterprise environments. So, following the thought process that these enterprise-level appliances and networking gurus use split DNS, I decided to use it as well. Setting it up was fairly simple: I host my domain (tpage.io) in AWS's Route53 service and run a pfSense firewall as the authoritative DNS server within my home network. I added the corresponding DNS records to each, with their respective internal and external IP addresses, and I was done.
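
For illustration, here's roughly what that pair of records looks like for a hypothetical app.tpage.io host (all addresses made up): the public zone in Route53 points at the externally reachable address, while pfSense's resolver (Unbound) answers internal clients with the private one via a host override.

```
; Route53 (public zone) - what external clients resolve
app.tpage.io.    300    IN    A    203.0.113.10

# pfSense host override (Unbound local-data) - what internal clients resolve
local-data: "app.tpage.io. IN A 192.168.1.50"
```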

Caddy

Caddy is great, I love it. I used to be a hardcore nginx fanboy, and nginx is definitely the de facto web server for web applications nowadays. Caddy changed my mind, and I haven't looked back. Automatic TLS certificate provisioning, along with dead-simple virtual host stanzas, means I don't have to worry about expiring certificates or passing in the correct headers/flags for reverse proxying. It's written in Go, statically compiled, and built to allow modular plugins to be included depending on your use case. I cannot brag on this little web server enough!
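
To give a feel for how little configuration that takes, here's a minimal sketch of one of those virtual host stanzas in Caddy v2 syntax (hostname and upstream port are made up); just naming the site is enough for Caddy to provision and renew a certificate for it.

```
app.tpage.io {
    # Caddy obtains and renews the TLS certificate automatically
    reverse_proxy localhost:8080
}
```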

TLS Certificates and renewals

Another great feature of Caddy is how it handles TLS certificate provisioning and renewals automatically. You supply it with the secrets to manage your DNS records, and it does the rest for you. It recently (within the past year) gained support for wildcard certificates, now that Let's Encrypt has rolled them out to production. One thing I will note: if you are running Caddy instances in Docker containers, make sure you bind mount the certificate storage path so you aren't getting new certificates every time you restart/update your containers (and running the risk of being rate limited by Let's Encrypt's API endpoints).
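
As a sketch of both points, assuming Caddy v2 with the Route53 DNS module compiled in (the module picks up AWS credentials from the environment or an instance role), the wildcard site might look like this:

```
# Caddyfile: wildcard certificate via the Route53 DNS challenge
*.tpage.io {
    tls {
        dns route53
    }
}
```

And the bind mount (paths and image name are placeholders; Caddy v2 keeps its certificate storage under /data, older versions used a different path):

```
# Persist certificate storage across container restarts/rebuilds
docker run -d --name caddy \
  -v /opt/caddy/Caddyfile:/etc/caddy/Caddyfile \
  -v /opt/caddy/data:/data \
  -p 80:80 -p 443:443 \
  caddy-with-route53
```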

A bit of context about my setup

I run two instances of Caddy: one on my AWS EC2 instance as a reverse proxy (hereafter referred to as external) and one on an Intel NUC that lives within my home network (hereafter referred to as internal). I thought about leveraging the VPN connection between my home network and my EC2 instance and using external as the reverse proxy for both internal and external traffic, but ultimately decided against it for latency and availability reasons. Both internal and external use the same Docker data layout and run independently of each other by specifying separate Caddyfiles; this allows me to consolidate my backups of each without having to worry about overwriting files or getting confused. One important thing to note is that external is the only Caddy instance that can renew certificates (both because it is externally facing and because it has an AWS IAM instance role allowing it to manage my Route53 DNS records), but both instances always have up-to-date certificates, and I get seamless TLS connections both inside my network and outside. I go into more detail about how I accomplished this below.
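
Concretely, the split looks something like this (container names, image, and paths are hypothetical): both instances share the same data layout but get their own Caddyfile, and only external holds the IAM role that lets it complete the Route53 DNS challenge.

```
# external (EC2): terminates public traffic and renews certificates
docker run -d --name caddy-external \
  -v /opt/caddy/Caddyfile.external:/etc/caddy/Caddyfile \
  -v /opt/caddy/data:/data \
  -p 80:80 -p 443:443 \
  caddy-with-route53

# internal (NUC): identical layout, its own Caddyfile; it just serves
# whatever certificates it finds in the synced storage path
docker run -d --name caddy-internal \
  -v /opt/caddy/Caddyfile.internal:/etc/caddy/Caddyfile \
  -v /opt/caddy/data:/data \
  -p 80:80 -p 443:443 \
  caddy-with-route53
```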

Propagating new TLS certificates from external to internal

If you read my last post, this should come as no surprise: I used unison. Unison is like rsync, except it does two-way syncing; it intelligently determines which files to sync in which direction so that source and destination end up completely the same. I took advantage of this by doing external <-> backup server <-> internal, which means that every time external renews the wildcard certificates, they get synced to the backup server and then synced to internal. Once the backups are completed, the containers are restarted and internal automatically loads the renewed certificates. It was a surprisingly smooth process and I'm definitely happy with it.
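
From the backup server's point of view, that chain is just two unison runs followed by a container restart. Host names and paths here are made up; -batch and -auto simply make unison run non-interactively.

```
# external <-> backup server, then backup server <-> internal
unison /srv/backups/caddy ssh://external//opt/caddy/data -batch -auto
unison /srv/backups/caddy ssh://internal//opt/caddy/data -batch -auto

# Restart internal's container so it picks up the renewed certificates
ssh internal docker restart caddy-internal
```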