
Getting Started With Docker

Docker Sep 4, 2021

I am not a developer, nor do I work with Dockerfiles. I do not create, manage or maintain docker images, and in almost every use case I will only ever pull an image rather than use docker build or clone a repository. The following is purely based on what I have learnt since starting my NAS and server journey 10 months ago.

And with that out of the way...

Docker is a way to have multiple programs running (in 'containers') on a single machine and monitor their performance individually. They can be completely sandboxed, or open to the LAN or internet depending on their function and your needs. Whatever happens in the container stays in the container, and will only affect the host machine where you have specified a particular connection (e.g. a volume mapping or using the host network).

One of the best features of docker is being able to create or destroy a container without destroying the file system it uses, meaning both the configuration and data remain available.

For example, I have an app which tracks my network data and saves it periodically. To do this, I've set it up with various credentials/API keys/network ports etc. At some point I no longer need the app, so I delete the container.
A week later, I realise it provided me a lot of info I use regularly, so I recreate the container. Do I need to completely reconfigure it or set it up from scratch? No I don't! Is all my previous data still there? Yes it is! Because I only deleted the container and not the separate file system, everything is still in place, ready for me to pick up where I left off.
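That whole lifecycle can be sketched on the command line. The image name (myuser/nettracker) and host path here are hypothetical placeholders, not a real project:

```shell
# Create the container, mapping its config path to a folder on the host.
docker run -d \
  --name nettracker \
  -v /volume1/docker/nettracker:/config \
  myuser/nettracker

# Later: remove the container. The host folder (and your data) survives.
docker stop nettracker
docker rm nettracker

# A week later: recreate it with the same volume mapping, and it picks
# up the existing config and data from /volume1/docker/nettracker.
docker run -d \
  --name nettracker \
  -v /volume1/docker/nettracker:/config \
  myuser/nettracker
```

These commands need a running Docker daemon, so treat them as a sketch of the pattern rather than something to paste verbatim.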

Docker Installation

To begin using docker and installing containers, you need the docker package. On a Synology NAS this is as easy as going to Package Center and installing it from there. Docker-compose is normally included in the installation, however the versions of docker and docker-compose can be updated manually and independently.

Docker for Windows also exists, but we're not going to cover it here; feel free to explore it for your own requirements or needs.

The standard parts of a container

Almost all containers will have the same basic requirements. These are as follows:

The Image

An image in this sense is a packaged program - think of it like a zipped-up file. It is the app or program you will be running.

Images are held in online repositories (like libraries), and a lot of them are stored on GitHub or Docker Hub. The developers who create these images also maintain them, pushing updates and fixes, and those doing active development will also reply to issues or requests you may post yourself (available once you've signed up for a free account).

Volume Mapping

The volume is where your data is stored, and the documentation for the image will normally explain which volume paths inside the container need to be mapped to a volume outside the container (outside meaning on your host machine, and in my case, on my Synology NAS). Sometimes more than one folder or file path will need to be mapped.

These are defined as [/folder/path/on/host/machine:/folder/in/container], passed with the -v flag if using docker run, or under the volumes definition if using docker-compose. In most cases, any host folder you want to use for a docker container must be created before you create the container, otherwise docker will return an error.
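As a sketch of the docker run style (the image name and paths here are hypothetical placeholders, and the host folder must already exist):

```shell
# -v maps a host folder (left) to a path inside the container (right).
docker run -d \
  --name myapp \
  -v /volume1/docker/myapp/config:/config \
  example/myapp
```

In docker-compose the same mapping appears as a list item under volumes:, e.g. - /volume1/docker/myapp/config:/config.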

It is also possible to define a volume to be created when using docker-compose. The upside is that you do not need to create it in advance, the downside is that you cannot define the folder location, it will always be created in the root docker volume path. To do this you will enter:

    services:
      containerX:
        container_name: containerX
        volumes:
          - myfolder:/folder/in/container

    volumes:
      myfolder:
        driver: local
An example of some container headings in docker-compose

Note the different indentation of the two volumes entries. The first, inside the container's definition under services:, is where the container will look to put its file system. The second, which starts in the same column as services:, is what tells docker-compose to create myfolder.


PUID and PGID

These terms refer to the user ID (PUID) and the group ID (PGID) of the user. These are important for making sure that containers have the right permissions to access the volume folder on the host machine. Ideally, you would create the volume folder and the container with the same user, but if that's not possible you can use PUID and PGID to specify which user has ownership and permissions for this container.

You find this info by SSHing into your machine. Typing id will pull up the current user's id credentials, which will include a uid, gid and the groups the user is part of:

how to find your user's UID

If you are logged in as an administrator, you can find another user's ID by typing id [username] such as id Bob or id plexMcPlexface. Note that usernames are case sensitive.
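For example, over SSH:

```shell
# Show the current user's UID, GID and group memberships.
id

# Print just the numeric UID or GID - handy for PUID/PGID values.
id -u
id -g
```

The number after uid= goes into PUID, and the one after gid= goes into PGID.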

Time Zone

The time zone - specified as TZ - is required for the container to correctly understand your own local time. If this is not specified, the image default will be used, which is normally the same as where the image developer is located. This means that timestamps for logs or other services may not match your own time. The timezone is passed via the TZ=[Continent/City] variable, such as TZ=America/Los_Angeles. A full list of timezones in the correct format can be found here.
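A quick way to sanity-check a timezone string before passing it to a container is to use it with date on the host (assumes a Linux shell):

```shell
# Preview the current time in a given zone; the same value goes into TZ=
TZ=Europe/Amsterdam date
TZ=America/Los_Angeles date
```

If the string is wrong, date silently falls back to UTC, so compare the output against the local time you expect.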

Port Mapping

Various services offered by the container may need to be reachable from outside the container, such as a web GUI or a search function, or a listening port for integration with another container. In these cases a port (or ports) will need to be mapped, and the image documentation will specify ports on a 1:1 basis - this means that if the container uses port 80, it will suggest a port mapping of 80:80.

These take the form of [hostPortNumber]:[containerPortNumber]. To access the port on your machine, you would type [hostMachineIP]:[hostPortNumber] into your browser. In some cases, the host port number may already be in use by the host machine, but this isn't a problem. Identify an unused port, and use that in place of the [hostPortNumber].

Example: the documentation suggests a port mapping of -p 80:80. However port 80 on your host machine is already in use, so you need to choose a different one. You know that port 1080 is free, so you change the port mapping to -p 1080:80. Your service is now available on [hostMachineIP]:1080.
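The example above as a docker run sketch (the image name is a hypothetical placeholder):

```shell
# Hypothetical image serving a web GUI on container port 80.
# Host port 80 is taken, so publish it on host port 1080 instead.
docker run -d \
  --name webapp \
  -p 1080:80 \
  example/webapp

# The service is now reachable at http://[hostMachineIP]:1080
```

Note the container side of the mapping (80) never changes; only the host side does.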


Restart Policy

Unless otherwise defined, if a container stops unexpectedly, it will not restart. This can be avoided by specifying a restart policy, such as restart: always or restart: unless-stopped in docker-compose (or --restart unless-stopped with docker run).
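In docker-compose this is a single line inside the service definition; a minimal fragment (containerX is a placeholder name):

```yaml
services:
  containerX:
    restart: unless-stopped
```

unless-stopped is a good default: the container comes back after crashes and reboots, but stays down if you stopped it deliberately.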

Container Name

If no container name is defined, the container will be assigned a random name when it is created. This can make it tricky to track your containers, so we define their names with container_name in docker-compose (or --name with docker run).

Assigning a network to the container

As with the container name, if no network is defined in your container setup, a new network will be automatically created. It will be given a name normally based on the image name, and assigned the next available incremental docker network subnet (meaning if the last network created was given 172.18.0.0/16, and that subnet is still in use, the new one will be assigned 172.19.0.0/16).

For this reason I prefer to create my docker networks in advance. See Docker Networks for more information.
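Creating a network up front looks like this (the network name and subnet here are your own choices, shown as placeholders):

```shell
# Create a user-defined bridge network with a subnet you control.
docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  myNetwork

# List networks to confirm it exists.
docker network ls
```

Containers can then join it by name, and the subnet won't shift between recreations.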

Variable Groups

Parts of the container setup are grouped into flags or sections, called the 'variable groups'. For instance, PUID and TZ are part of the environment variable group, passed either as environment in docker-compose or with the -e flag in docker run. The types of variable groups are listed in the tabs when using the Synology Docker package to set up a container.

You can use the below sample docker-compose to find all the elements listed above:

    services:
      linx:
        container_name: linx
        image: andreimarcu/linx-server
        environment:
          - TZ=Europe/Amsterdam
          - PUID=1020
          - PGID=100
        volumes:
          - /volume1/linx/files:/data/files
          - /volume1/linx/meta:/data/meta
          - /volume1/linx/linx-server.conf:/data/linx-server.conf
        networks:
          - myNetwork
        ports:
          - "8085:8080"
        restart: unless-stopped

    networks:
      myNetwork:
        external: true
A docker-compose file for a file-share container called Linx
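To use a file like this, save it as docker-compose.yml in its own folder and run compose from that folder (newer installs use docker compose instead of docker-compose):

```shell
# From the folder containing docker-compose.yml:
docker-compose up -d      # create and start the container in the background
docker-compose logs -f    # follow the container's logs
docker-compose down       # stop and remove the container (mapped data survives)
```

Because the volumes are mapped to the host, down followed by up -d recreates the container without losing configuration or data.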

Creating a compose file from a pre-existing container

You may have created some of your containers using the Synology GUI or a docker run command, but now want them as docker-compose.yml files. The following command launches a temporary container which immediately prints a compose-format version of the named containers to the same SSH session.

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock red5d/docker-autocompose [container-name-or-id] [additional-names-or-ids]

Replace the elements in [] as needed with your container name or ID. The resulting .yml output will likely include more info than you need to recreate the container using docker-compose, so you will need to understand what you can pare down before then using it in your own compose file.

Docker Container List

Head over to this page for a list of containers I've set up in the past and their basic docker-compose files.

Docker Swarm

I'm only including this heading because someone may look for it.

I do not use swarm, and unless you have multiple machines which all need to have the same container deployed on them all, swarm isn't for you. Ignore it.

Getting the most out of docker-compose: tips and tricks
A list of handy tips you can implement immediately when creating your docker-compose files

A guide to the different types of Docker Networks
The most important bits of information you need to successfully create and manage your docker networks while ensuring container connectivity


PTS fell down the selfhosted rabbit hole after buying his first NAS in October 2020, only intending to use it as a Plex server. Find him on the Synology Discord server

Have some feedback or something to add? Comments are welcome!

Please note comments should be respectful, and may be moderated