Getting the most out of docker-compose
There are a number of ways to create your Docker containers. You can use docker run commands over SSH, you can use a proprietary GUI such as the ones provided by Synology or UnRaid systems, or you can use docker-compose.
If you're reading this, I'm going to assume you already use docker-compose, or at least have knowledge of it, either with Portainer or through SSH. The following are my top tips for getting the most out of it.
1. Always name your stack when using SSH
This is something I don't see a lot of people do. In Portainer, if you want to use docker-compose to create a container then you must create a stack and you must name it, otherwise it won't let you deploy. Over SSH, however, this isn't necessary. I find that a bit sloppy, so I name my stacks whenever I use SSH by using the -p flag as follows:
docker-compose -p "stacknamegoeshere" up -d
Why do I want to name the stack? Well, if I don't, docker will auto-name anything I haven't specified, such as networks, using a default project name (typically the containing folder's name) as a prefix. This doesn't help me keep things organized the way I want, so naming the stack explicitly helps.
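If you'd rather not type the -p flag every time, docker-compose will also pick the project name up from the COMPOSE_PROJECT_NAME variable, which you can drop into the .env file that sits next to your compose file (more on .env files in tip 3). A minimal sketch:
# .env - in the same directory as the docker-compose.yml
COMPOSE_PROJECT_NAME=stacknamegoeshere
With that in place, a plain docker-compose up -d uses your chosen stack name.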
2. Always specify your time zone or local time
Have you ever gone into your container logs and seen that the timestamps don't match your local time? Sometimes you'll even notice that the container GUI has a different time than yours.
This is because, by default, a container doesn't inherit your host's time zone (most images default to UTC). I live in Japan, and none of the containers I use were built with my time zone, so this is important to me for easily and quickly tracking down issues and errors in the logs.
There are two ways I'll cover in which you can add a time zone:
Use the 'TZ' variable
Under environment you can use the TZ variable as follows:
services:
  container:
    environment:
      - TZ=Europe/Amsterdam
The entry itself uses the Continent/City format, otherwise known as tz database time zones, and you can find a list of those here.
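If your server runs a typical Linux distribution, you can also pull the list of valid names straight from the machine itself (timedatectl assumes a systemd-based distro):
# list every valid tz database name known to the host
timedatectl list-timezones
# or browse the raw tz database files
ls /usr/share/zoneinfo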
Use your machine's local time
Whereas the above used an environment variable, you can instead bind-mount the file containing your machine's local time into the container as read-only, which allows it to (surprise) use your machine's local time:
services:
  container:
    volumes:
      - /etc/localtime:/etc/localtime:ro
The :ro at the end of the string designates this volume mapping as read-only.
Other than the fact that you've given your container access to another file on your machine (albeit read-only), I'm not sure there are any real pros or cons for either method. You may prefer one over the other, or, like me, find the volumes route more consistent. Either way, it will help keep your container time-logging in order.
3. Use a separate .env file
Although not required for docker-compose to work, the .env file is a companion to a docker-compose.yml. It is stored in the same directory and is used to pass information to the docker-compose file in the form of $INFORMATION. For instance, you might have a 48-character password required for MYSQL_ROOT_PASSWORD. You can list this in the .env file as PWD=[my48characterpass], and in the docker-compose file the variable is pulled in by stating MYSQL_ROOT_PASSWORD=$PWD.
As an example, here's my whole Authelia stack in docker-compose:
###########NETWORKS###########
networks:
  Ra:
    external: true
###########NETWORKS###########
services:
  mysql:
    image: mysql:latest
    container_name: authelia_mysql
    volumes:
      - $DOCKERDIR/authelia/sql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLROOTPWD
      - TZ=$TZ
    ports:
      - $MYSQLPORT:3306
    restart: unless-stopped
    labels:
      - $DIUN
    networks:
      - Ra
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: authelia_php
    ports:
      - 8088:80
    environment:
      - PMA_PORT=$MYSQLPORT
      - PMA_HOST=$PHPHOST
      - TZ=$TZ
    restart: unless-stopped
    networks:
      - Ra
  redis:
    image: redis:latest
    container_name: authelia_redis
    volumes:
      - $DOCKERDIR/authelia/redis:/data
      - $LOCALTIME
    ports:
      - 6380:6379
    restart: unless-stopped
    networks:
      - Ra
  authelia:
    image: authelia/authelia
    container_name: authelia
    ports:
      - 9091:9091
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
    depends_on:
      - redis
      - mysql
    volumes:
      - $DOCKERDIR/authelia/app:/config/
    restart: unless-stopped
    networks:
      - Ra
    labels:
      - $WATCHTOWER
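And for illustration, a matching .env might look something like this (every value below is a placeholder, not my real setup):
# .env - stored alongside the docker-compose.yml
DOCKERDIR=/volume1/docker
TZ=Asia/Tokyo
MYSQLROOTPWD=[my48characterpass]
MYSQLPORT=3307
PHPHOST=authelia_mysql
PUID=1000
PGID=1000
# full volume mapping for local time (see tip 2)
LOCALTIME=/etc/localtime:/etc/localtime:ro
# update-notification labels (example values)
DIUN=diun.enable=true
WATCHTOWER=com.centurylinklabs.watchtower.enable=true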
You'll notice variables such as $DOCKERDIR and $MYSQLPORT are used across multiple services in the stack. If I ever need to change these, I only need to go into the .env file and make the relevant change once, and it updates for all of them.
The only con I can think of to having a separate .env file is that you do actually need a separate file, and will sometimes need to go back and forth between the two. The advantages however are pretty good, and I've listed some below:
Security
It's debatable whether home users will find this more secure, but for those running multiple servers and sites, this can come in handy.
By storing sensitive information such as passwords and API keys in the .env file, these do not need to be made explicit in the docker-compose file. This means you can share your docker-compose file across sites without having to include or redact any sensitive info; each site can run the same compose file in combination with its own .env.
For me, there's an added benefit because I share a lot of my compose files with others on Discord or Reddit etc. to offer guidance and help when needed. I can just go in, pick one up, and paste it in without worrying that I'm giving away any important personal data.
Some may point out that, whether you put your data directly into the compose file or use the .env file, it's still unencrypted. This is true. Other users may prefer to look at docker secrets; although these are technically only for use with docker-swarm (I'm NOT getting into that on this site), there are 'fake' secrets available to regular docker-compose users.
Custom short-hand
Say you've got a stack of containers in a compose file where you're specifying a database along with its name, hostname, username, and password, and you have another service relying on it which also needs those values. You don't want to write them out multiple times because they can be long, so you set them as .env variables like DBNAME and DBPWD, which are easily memorable and quick to type, as sketched below.
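A minimal sketch (service names, images, and values here are made up for illustration):
# .env
DBROOTPWD=[rootpassword]
DBNAME=appdb
DBUSER=appuser
DBPWD=[mylongdatabasepassword]
# docker-compose.yml
services:
  db:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=$DBROOTPWD
      - MYSQL_DATABASE=$DBNAME
      - MYSQL_USER=$DBUSER
      - MYSQL_PASSWORD=$DBPWD
  app:
    image: example/app:latest # hypothetical app that needs the same credentials
    environment:
      - DB_NAME=$DBNAME
      - DB_USER=$DBUSER
      - DB_PASSWORD=$DBPWD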
Replication
I maintain a 'base' .env file which I copy into each new docker-compose directory I create. It has basic PGID and PUID for certain users, it contains the docker directory, media directory, two of my domain names, a few labels for things like Watchtower and Diun, and both the environment and volume time zone variables listed above.
In this way, I always have a base that I can immediately add to those docker-compose files which don't already have them, and I don't have to remember long strings or lots of different information.
4. Define a network
A lot of people seem to be fine with letting docker create a bridge network for a container as part of the docker-compose up -d command. Maybe I'm a bit OCD, but I hate this. The naming convention it uses annoys me, and it won't necessarily use the lowest available docker subnet (i.e. maybe 172.22.0.0 is available, yet it gives you a network on 172.35.0.0 instead). This isn't helpful when managing firewalls, as you then need to go and find out which subnet it has used. It also adds steps when trying to identify networks in general.
You can create networks either in SSH, Portainer, or via the compose file. I recommend either using SSH to create the network before creating the container, or via the compose file (at time of writing, Portainer had some network creation issues):
Via SSH
- Log in, and type the following (this is the most basic command):
docker network create [insert network name here]
- To set your own IP, you would type the following:
docker network create --subnet=172.xx.0.0/24 --gateway=172.xx.0.1 [insert network name here]
The subnet mask (/24 after the subnet IP) is required to give the IP range. In this way, you can create a custom, named docker bridge network at the IP of your choosing.
- To use this network for a container in docker-compose, you would create a networks block, starting from the same column as services, e.g.
services:
  container:
    image: container:image
    networks:
      - MyNamedNetwork
networks:
  MyNamedNetwork:
    external: true
The external: true part tells docker-compose that the network already exists.
Via docker-compose
You can specify a named network in your docker-compose file, which will be created the first time you run the docker-compose up -d command. Consequently, if you ever run the docker-compose down command, it will also be removed.
You specify it as follows:
services:
  container:
    image: container:image
    networks:
      - MyNamedNetwork
networks:
  MyNamedNetwork:
This will create a network called [stack name]_MyNamedNetwork. I personally don't like having the stack name as a prefix to my network name, normally because I call the network the same as the stack, and I don't want a network to be called portainer_portainer, for instance.
The above will also automatically set the IP for the network. You can also add some parameters to do this yourself (sketched below), but it's up to you to decide whether you prefer using SSH or a docker-compose.yml to manage networks.
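For reference, here's a sketch of the compose-managed route with the subnet set manually; the name: key (compose file format 3.5+) also stops docker-compose adding the stack-name prefix:
networks:
  MyNamedNetwork:
    name: MyNamedNetwork # no stack-name prefix
    ipam:
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1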
5. Maintain network discipline
There are a few practices which are discouraged when it comes to networks.
Having all your containers on one network
One of docker's great benefits is that each container can be run independently of any other program, and of the host machine's processes, meaning it can be super secure. This doesn't mean it's invulnerable to attack, however, and sometimes hackers can gain access to a container, especially if you've exposed it to the internet.
If you suffer a breach on a container, the very nature of docker means that, provided that container is on a bridge network, the hacker will only have access to that network. But (and I'm sure you can see where this is going) if you have ALL your containers on that same network, then the hacker could conceivably now hack them all.
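The better practice is to give each stack (or each group of containers that genuinely need to talk to each other) its own network. As a minimal sketch, two unrelated stacks in two separate compose files might look like this (the names and images are just examples):
# media stack - its own network
networks:
  media:
    external: true
services:
  radarr:
    image: linuxserver/radarr:latest
    networks:
      - media
# monitoring stack - a completely separate network
networks:
  monitoring:
    external: true
services:
  grafana:
    image: grafana/grafana:latest
    networks:
      - monitoring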
Using the 'host' network
You've likely seen variables in some compose files which look like network_mode: bridge and network_mode: host. These tell the container to join the default bridge network or the host network, respectively.
The case against the host network is even stronger than the case against a single bridge network for all your containers. A bridge network is separate from your host machine's network, meaning that host processes cannot be accessed from the bridge. So even if your containers on the bridge network fall to a hacker, he/she/they still don't have access to your machine's network.
BUT! If a breached container is on the host network, then your whole machine becomes susceptible to the same attack, which could be the end of your whole server. You could suffer a ransomware attack, where you're locked out of your data and/or machine completely unless you pay a certain sum of money to the attacker, or the attack is simply malicious and you lose everything immediately.
This is obviously the worst-case scenario, and there are other ways to protect yourself, but your security is only as strong as your weakest link. Don't let it be your docker container network!
6. Using docker as a non-root user
After you install docker and start running various commands (e.g. docker run, docker-compose up, docker network create, etc.) you'll find that you don't have the right permissions, and you'll have to add sudo in front of each of these to run the commands as the root user. This can get annoying, and can also potentially reduce your security.
The way around this is to create a docker group, and add your user to it.
We do this with SSH, and once inside we type the following:
sudo groupadd docker
This will add the docker group. Next, we need to add our user to the group:
sudo usermod -aG docker [user]
That's it.
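One caveat: the group change doesn't apply to sessions that are already open. Log out and back in (or use newgrp), then test with any docker command:
# refresh group membership in the current shell (or just log out and back in)
newgrp docker
# if this runs without sudo, the group is working
docker ps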
For Synology Users
Again we will use SSH to do this. Log in to your SSH session and type the following:
sudo synogroup --add docker
sudo synogroup --member docker [username]
Remove the [ ] brackets and type your username in that space. Repeat for all users as required.
7. Mapping your folders using ./
There are lots of ways you can set up your docker and compose folder structure. Some people like having all their compose files separated by folders in a single 'compose' directory, while others like to have the compose file and mapped folders inside the service directory. This tip is for the latter.
When specifying a mapped folder, you might see something like this:
volumes:
  - /mnt/data:/data
or maybe
volumes:
  - /volume1/docker/compose/service/config:/config
In both situations, you need to create the folder structure before you run the up command.
Let's say you have a folder tree which is based on services, and you keep your docker-compose.yml file inside each service folder. Something like this:
volume1
└── docker
    ├── service1
    │   ├── config
    │   ├── data
    │   └── docker-compose.yml
    └── service2
        ├── config
        ├── data
        └── docker-compose.yml
Rather than needing to write out the whole folder path in your volume mapping (i.e. /volume1/docker/service1/config:/config) you can simply type:
volumes:
  - ./config:/config
The reason for this is that the . before the / is shorthand for 'this folder'; relative paths are resolved against the directory containing the docker-compose.yml. As you already need to be in the service1 folder to run the docker-compose up -d command, this is a perfectly acceptable way to write your docker compose file.
8. Proxy your docker socket
Bear with me here, there's a little to unpack.
Some services require access to the docker socket, and in most cases you'll be advised to use the following volume mapping when creating that container:
- /var/run/docker.sock:/var/run/docker.sock
This gives your service full, unrestricted access to your docker socket, which in most cases is fine, but in cases where you've exposed that service to the internet it could be a security concern.
Enter the socket proxy. This is another container which should never be exposed to the internet. It requires a little more config in the docker-compose files of services which need to be added to the socket proxy, but I'll walk you through the steps:
Creating the socket proxy network and container
We're going to do this all in one go.
- Create a docker-compose.yml file and edit it
- Copy and paste the following:
###################################
networks:
  socket_proxy:
    name: socket_proxy
    ipam:
      config:
        - subnet: 172.100.0.0/24 # change as necessary
###################################
services:
  socket-proxy:
    container_name: socket-proxy
    image: tecnativa/docker-socket-proxy
    restart: always
    networks:
      - socket_proxy
    ports:
      - "2375:2375"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      - LOG_LEVEL=info # debug,info,notice,warning,err,crit,alert,emerg
      ## Variables match the URL prefix (i.e. AUTH blocks access to /auth/* parts of the API, etc.)
      # 0 to revoke access.
      # 1 to grant access.
      ## Granted by Default
      - EVENTS=1
      - PING=1
      - VERSION=1
      ## Revoked by Default
      # Security critical
      - AUTH=0
      - SECRETS=0
      - POST=1 # Watchtower
      # Not always needed
      - BUILD=0
      - COMMIT=0
      - CONFIGS=0
      - CONTAINERS=1 # Traefik, portainer, etc.
      - DISTRIBUTION=0
      - EXEC=0
      - IMAGES=1 # Portainer
      - INFO=1 # Portainer
      - NETWORKS=1 # Portainer
      - NODES=0
      - PLUGINS=0
      - SERVICES=1 # Portainer
      - SESSION=0
      - SWARM=0
      - SYSTEM=0
      - TASKS=1 # Portainer
      - VOLUMES=1 # Portainer
The above does a number of things:
- Creates the socket_proxy docker network - change this IP if you need to
- Creates the socket-proxy container, providing access to the docker.sock
- Attaches the container to the socket_proxy network, and sets the variables
You can check out the variables at the docker hub repo here, and when you've got it how you want it, SSH to the directory your docker-compose.yml is in and run docker-compose up -d to spin up the proxy container.
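Since VERSION=1 is granted above and port 2375 is published to the host, a quick sanity check (assuming you have curl installed) is to hit the version endpoint:
# should return your docker version info as JSON
curl http://localhost:2375/version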
Specifying your services to use the socket proxy
Now when you want to provide a container with access to the docker socket, you need to attach it to the socket_proxy network as well, and add the environment variable DOCKER_HOST=tcp://socket-proxy:2375 (socket-proxy is the container name, and its port is 2375).
For example, let's take Watchtower:
networks:
  socket_proxy:
    external: true
services:
  watchtower: # automatic container version monitoring and updating
    container_name: watchtower
    image: containrrr/watchtower:latest-dev
    environment:
      - TZ=$TZ
      - DEBUG=true
      - WATCHTOWER_LABEL_ENABLE=true
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
      - WATCHTOWER_INCLUDE_STOPPED=true
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      # [pushover-api-token] is a placeholder - insert your own Pushover API token
      - WATCHTOWER_NOTIFICATION_URL=pushover://shoutrrr:[pushover-api-token]@$PUSHKEY/?devices=$PUSHDEVICE
      - DOCKER_HOST=tcp://socket-proxy:2375
    command: --interval 21600
    restart: unless-stopped
    networks:
      - socket_proxy
Note the DOCKER_HOST variable at the bottom of the environment block, and the socket_proxy network specified in networks.
This is by far a more secure way of exposing the docker socket to your services, while keeping it safe from the internet.
9. Do NOT delete your docker-compose.yml file(s)
Not something I thought I'd need to mention, however having seen a few posts on various forums:
If you use docker compose, then your docker-compose.yml should be treated as the be-all and end-all for managing your container(s). Need to update the image? You'll need the docker-compose.yml you used to create the container. Need to make a change to a variable? You'll need the docker-compose.yml. Want to spin down the services, volumes, and networks associated with a particular service? You get the picture.
Now there are other ways to do the above without the original file. But they're a faff, annoying, and don't always work in the way you need/want/expect.
So, once more, do NOT delete your docker-compose.yml files. When you have a new container/service to try, either (preferred) create a new one in a new folder, or (if you feel comfortable that you know what you're doing) add the new container(s) to an existing docker-compose.yml file without changing any pre-existing container setups.
Bonus Tip!
You can name your docker-compose.yml something else
This really doesn't seem to be widely publicized or even used, but it's a real thing.
One of the things I think could be done better is that there's only one command to raise a container, and it expects one specific file name. This means that every single docker-compose.yml file needs to be in its own folder, as you can't have two files with the same name in the same folder. Computering for Dum-Dums 101, right?
Well, you don't actually need to do this. You could call it compose.yml or docker.yml. You could call it by its service name (e.g. portainer.yml) or you could call it daddy.yml. Whatever you want.
How do we tell docker-compose to use a differently named .yml file? Well, it's simple. We add a -f [filename] argument to our docker-compose up command.
Let's say we have a docker-compose file called radarr.yml. We raise it as follows:
docker-compose -f radarr.yml up -d
Done. Your system will run whatever's in that .yml file instead of docker-compose.yml.
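One thing to remember: the -f flag isn't sticky, so every subsequent command against that stack needs it too, not just up:
docker-compose -f radarr.yml down
docker-compose -f radarr.yml pull
docker-compose -f radarr.yml logs -f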
In this way, you could have one folder for all your docker-compose services and apps. If you use a .env then it would need to be populated in such a way that it satisfies all the services in each of the .yml files, but it's possible. I'm not sure I'd particularly want all my sensitive data in one unencrypted file, but then I'm not you and you can do whatever you want.