Moving away from docker run

I've recently made the jump from using docker run commands to docker-compose, as it allows me to store my container configurations as code, which makes backing up, restoring, and updating extremely easy.
Previously I dreaded having to upgrade my containers, to the point where I installed and configured Portainer and Portainer agents on all of my docker hosts so I could manage everything from some sort of centralized UI.
Portainer and the others
This worked well and has some nice features, like container templates and a web-based terminal I can use to jump into my container sessions, but it did not have any (supported) way of automatically updating my containers to the latest version. I had to go to each container's UI screen and “Recreate” it after pulling the latest image. That became unfeasible with 5 docker hosts, each running roughly 3-20 containers. I looked at multiple ways of automating this process.
But I could never get any of them working the way I wanted, or the tool had been abandoned. So I bit the bullet and, one by one, replicated all of my docker containers via docker-compose. I know a docker-compose.yml file is supposed to represent a collection of “services” that make up a single application, but I made my files per host, with each “service” being a container I run on that docker host. In that sense, each of my docker hosts is an “application” made up of multiple different services.
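A per-host file ends up looking something like this (the service names and images here are invented for illustration, not my actual setup):

```yaml
# docker-compose.yml for one docker host: otherwise-unrelated
# containers grouped as the "services" of that host
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
```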
Once I had my configuration written in the YAML files, it was time to figure out how to deploy and update my containers automatically.
I didn't want to do anything fancy, so I settled on a basic shell script that pulled the latest copy of my compose file for the host it was running on, then ran docker-compose pull && docker-compose up -d --remove-orphans. This pulls the latest image versions for all my containers and recreates them as necessary. The --remove-orphans flag removes any containers that aren't part of my docker-compose.yml, which enforces that every container on the host is spun up via docker-compose.
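The update step can be sketched roughly like this; it assumes the latest compose file has already been fetched into the current directory (e.g. via git pull), and the COMPOSE_CMD knob is only there so the function can be exercised without Docker installed:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Update every container defined in this host's docker-compose.yml.
update_containers() {
  local compose=${COMPOSE_CMD:-docker-compose}
  $compose pull                     # pull newer image versions for every service
  $compose up -d --remove-orphans   # recreate changed containers, drop strays
}
```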
I set the script up in a cron job to run every hour, and I was as happy as can be. My images were being pulled and my containers were being updated. Any new change I make to my docker images is deployed to my docker hosts within (at most) an hour!
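The crontab entry itself is a one-liner (the script path and log location here are placeholders):

```shell
# Run the update script at the top of every hour
0 * * * * /opt/scripts/update-containers.sh >> /var/log/container-updates.log 2>&1
```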
The next issue I had to face was using secrets within my docker-compose setup. The YAML files let you hardcode secrets directly in the file itself, or specify an env_file that holds the secrets without putting them directly in the YAML. Neither of those solutions worked for me: the repository I host my YAML files in is public to the world, so committing any sort of secret to it is a no-go.
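For reference, those two options look like this in a compose file (the service and variable names are invented for illustration):

```yaml
services:
  myapp:
    image: myapp:latest
    environment:
      - DB_PASSWORD=hunter2   # hardcoded: ends up in the public repo
    # or:
    env_file:
      - ./myapp.env           # KEY=value lines; just as unsafe to commit
```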
So off to the whiteboard I went, trying to architect some secure way to expose secrets to my containers at run time and delete them afterwards.
The solution I came up with is as follows:
- Make my env_file a template, and store my secrets using credstash in AWS's DynamoDB
- When the shell script runs to deploy/update my containers, interpolate the secrets into the env_file
- Start the container with the exposed secrets
- Delete the env_file after the container has successfully started
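Roughly, those steps look like this; the key name, file paths, and the {{...}} placeholder syntax are simplified for illustration, and only the credstash get call in the comment is the real credstash CLI:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Render the env_file template by substituting a secret into it. In the real
# script the value comes from credstash, e.g.:
#   DB_PASSWORD=$(credstash get myapp.db_password)
# Here it is read from the environment so the sketch is self-contained.
render_env_file() {
  local template=$1 output=$2
  sed "s|{{DB_PASSWORD}}|${DB_PASSWORD}|" "$template" > "$output"
}

deploy() {
  local compose=${COMPOSE_CMD:-docker-compose}
  render_env_file app.env.template app.env
  $compose up -d   # the service references app.env via env_file:
  rm -f app.env    # rendered secrets are gone once the containers are up
}
```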
I won't go into too much detail on how I accomplish the above, but the process works well and is secure. I understand that if anyone gains access to a shell session inside my containers or on the docker host, they would be able to see all my secrets. That is unavoidable and holds true for any major docker host system out there; it's just built into docker itself. The Docker folks have been doing great work with things like docker secret, but it won't ultimately solve the problem: you always have to store your secrets somewhere, and if you have some way to access them via automation, then you are at a certain level of risk, period.