Using a Synology package, docker run or docker-compose (docker compose) to install a container

How to Install a Container

Docker Aug 23, 2021

There are a number of ways you can install a container. Here we'll cover the main methods.

Synology Docker Package UI

If you don't have a Synology NAS then move on to the next method.

The Docker package for Synology DSM comes with its own GUI, accessible from within DSM. It's a good way to learn the basic moving parts of setting up a container, but beyond that it's...ok. It will get the job done, but it's pretty clunky, with a lot of tabs. To create a container, you first search the registry for the image you want (there can be multiple images for the same app, with different versions depending on operating system, which web backend is used, etc.), then launch the image (which doesn't actually launch it - it opens the configuration for creating the container), and then fill in the rest across multiple tabs:

The Registry
Launching an Image
The various settings tabs

Feel free to have a look around it, but I'm going to move swiftly on to two other methods, both of which require you to SSH into your NAS:

docker run

There are a number of docker [command] options, however to get a container up and running you will use docker run. It is a one-off command, usually written across multiple lines, that creates a container, and it can look something like this:

docker run \
    --cap-add=NET_ADMIN \
    --device=/dev/net/tun \
    -d \
    -v /volume1/docker/TorrentVPN/resolv.conf:/etc/resolv.conf \
    -v /volume1/Media/NewtoSort:/data \
    -e "OPENVPN_PROVIDER=[VPNPROVIDER]" \
    -e "OPENVPN_CONFIG=nl2-ovpn-udp" \
    -e "OPENVPN_USERNAME=[VPNUSERNAME]" \
    -e "OPENVPN_PASSWORD=[VPNPASSWORD]" \
    -e "LOCAL_NETWORK=192.168.1.0/24" \
    -e "OPENVPN_OPTS=--inactive 3600 --ping 10 --ping-exit 60" \
    -e FW_DISABLE_IPTABLES_REJECT=true \
    -e "PGID=[YOUR_PGID]" \
    -e "PUID=[YOUR_PUID]" \
    -p 9091:9091 \
    --sysctl net.ipv6.conf.all.disable_ipv6=0 \
    --name "transmission-openvpn-syno" \
    haugene/transmission-openvpn:latest
An example docker run command to set up Transmission with a VPN - replace anything in [square brackets] with your own values

I won't go through this line by line, however in essence this will create a container which runs the torrent downloader 'Transmission'.

Of note though - the -v flag defines a 'volume mapping'. Everything on the left-hand side of the : is a file or folder on your machine; the right-hand side of the : is where the container sees that file or folder, inside the container. Here you can see I have specified two volume mappings: /volume1/docker/TorrentVPN/resolv.conf, which is accessible in my folders, holds what the container believes to be /etc/resolv.conf, and data the container writes to /data will be stored on my machine at /volume1/Media/NewtoSort.

The GUI will be available on my local area network at my host machine's IP address, on port 9091.

To run this command you simply copy-paste it (or type it out) as-is, changing the variables you need to - in this case the volume mappings, the PGID and PUID, and the port number before the colon. It is, however, a one-time deal: unless you save the command somewhere, the code above will not persist. The container will persist, as will the folders you've created for the volume mappings.
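One way around that is to save the command as a small shell script you can re-run later. This is just a sketch - the filename is arbitrary, and the command body below is trimmed; paste your full docker run command between the EOF markers:

```shell
# Write the docker run command into a re-runnable script, then make it
# executable. The filename and the trimmed command body are illustrative.
cat > start-transmission.sh <<'EOF'
#!/bin/sh
docker run \
    -d \
    -p 9091:9091 \
    --name "transmission-openvpn-syno" \
    haugene/transmission-openvpn:latest
EOF
chmod +x start-transmission.sh
```

Next time you need to recreate the container, running ./start-transmission.sh repeats the exact same command.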

docker run is a decent way to create a container, but in my mind not the best way. That would be:

docker-compose

docker-compose is a powerful tool. It relies on a docker-compose.yml file, saved in a folder of your choosing, from which it is run. It has a number of benefits over docker run:

  • by default, all container setup configurations are saved
  • as all configurations are saved, they can be modified very easily if a mistake is identified, or you need to troubleshoot
  • any container can be created with a simple docker-compose up -d command
  • there are more variables available to the user in setting up a container
  • by making use of a .env file in the same directory as the docker-compose.yml file, sensitive data can be referenced without the need to make it explicit. This is great for sharing docker-compose configurations
  • you can combine multiple containers in one docker-compose.yml
💡
Either use one docker-compose.yml for each individual container/service, OR put multiple services inside a single docker-compose.yml. DO NOT delete or overwrite a docker-compose.yml file once you have your container/service up and running, as future updating and management will become difficult

Code editors

It's recommended to use a code editor to manage the content of your docker-compose.yml files. Indentation in the code specifies ownership or references, and it's important to keep the right indents in the right places. I use Visual Studio Code on my Windows machine, but other editors work too. Some people use Notepad, but at the very least I would recommend Notepad++.


Creating the container using docker-compose

Docker-compose is used by creating a file called docker-compose.yml and populating it with various pieces of information. Some entries are required for all containers, such as the image you will pull to create the container; others vary depending on which variables the image needs in order to operate properly. Once your docker-compose.yml is complete, you will either:

  • Log in to your NAS via SSH, navigate to the directory which your docker-compose.yml is located in and type docker-compose up -d to run the container
  • Use Portainer, a container itself, to manage your other containers and create stacks using docker-compose in a neat GUI

A typical docker-compose.yml may look like this:

##############NETWORKS##############
networks:
  Horus:
    external: true
##############NETWORKS##############

services:
  radarr: #movie search and organizing agent
    image: ghcr.io/linuxserver/radarr
    container_name: radarr
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
      - UMASK=022
      - DOCKER_MODS=gilbn/theme.park:radarr
      - TP_DOMAIN=gilbn.github.io
      - TP_THEME=aquamarine
    volumes:
      - $DOCKERDIR/Radarr:/config
      - $MEDIADIR:/media
    ports:
      - 7878:7878
    labels:
      - com.centurylinklabs.watchtower.enable=true        
    restart: unless-stopped
    networks:
      - Horus

  sonarr: #TV show search and organizing agent
    image: ghcr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
      - UMASK=022
      - DOCKER_MODS=gilbn/theme.park:sonarr
      - TP_DOMAIN=gilbn.github.io
      - TP_THEME=aquamarine
    volumes:
      - $DOCKERDIR/SonarrV3:/config
      - $MEDIADIR:/media
    ports:
      - 8991:8989
    labels:
      - com.centurylinklabs.watchtower.enable=true    
    restart: unless-stopped
    networks:
      - Horus
sample docker-compose for setting up radarr and sonarr
most services will have sample docker-compose.yml files on their github or dockerhub documentation pages, so you don't have to remember everything that might be needed yourself

Let's break down the compose file above:

  1. Indentations are important. Any line that is indented shows that it 'belongs' to the heading above it (a heading always ends with a :)
  2. networks in the first column (i.e. not indented): this defines the networks the containers will use (not always needed if you do not specify a particular network for your containers)
  3. services in the first column: everything under this heading is related directly to specifying the settings/initial set up which the container(s) the docker-compose file will create (required)
  4. radarr in the second column: the name of the service - this is required, and must be unique within the compose file. It is how this particular container will be identified. You can name it anything you like, but it's generally better to name it something related to the service/app you're installing (required)
  5. image: this tells docker-compose which image to pull to create the container - it's basically the template and the app/service you're installing (required)
  6. container_name: the displayed name for your container (not required - if no name is provided, docker-compose will generate one from the project and service names)
  7. environment: a list of variables which the container requires. Sometimes a container will not require any environment variables; sometimes it requires a few, with others available for finer control over the container's deployment (e.g. logging may default to debug, but another option may be verbose and you want that instead: leaving the variable out means the container defaults to debug, while including it selects verbose logging)
  8. volumes: if the container requires a directory to store data, or needs access to a config file, it will look inside the container at the path on the RIGHT-hand side of the :. Everything on the LEFT-hand side of the : is a folder/directory on your machine. This is called mapping, where you say, 'hey container, I know you THINK you're putting stuff into the directory on the right, but I'm going to find it on my machine in the directory on the LEFT!'
  9. ports: if your container has a GUI, or needs to be reachable by other containers or services, then you will need to specify ports. With our radarr example above, this means we can access the radarr GUI at http://yourNASip:7878. Again, the right-hand side of the : is INSIDE the container, and the left-hand side is OUTSIDE, on your machine. You may find that some containers want to use the same port, say port 80. If you try to bring up two containers which both specify ports: - 80:80, the second one won't load properly: it will conflict, saying port 80 is already in use. The way around that is to change the machine port (not the container port) to one not in use. So if you had port 90 free, you would change it to ports: - 90:80 (remember, the left-hand side is the machine side) and you can now access your GUI/service at http://yourNASip:90
  10. labels: this is a little more advanced, but a quick explanation: if a container image is built in such a way that it can track labels, then labels allow that container to track information about another container. For example, in the compose file above, this label allows my Watchtower container to track my radarr container's image releases. If a newer image is pushed to the registry, my Watchtower container sees it, pulls it, and updates radarr for me automatically. Labels are also used by various other containers such as Diun and Traefik
  11. restart: this tells docker under what circumstances the container should restart if it stops. I tend to have it as unless-stopped (meaning that unless the container is deliberately stopped - e.g. with a 'stop' or 'kill' command - it will try to restart), but you can change this to always if you prefer
  12. networks in the third column: this is where you specify which network you want this specific service/container to connect to. If you specify a network here, then you must also define it under networks in the first column. There is no limit to how many docker networks you can connect a container to (as far as I know), but you would rarely need more than three (depending on your routing and security requirements)
the above file will create two containers, radarr and sonarr, and connect them both to the network named 'Horus' - more than one container is generally called a 'stack'. As far as I know, there is no limit to how many individual containers you can put into a single stack
as mentioned above, I like to define my networks. The network 'Horus' is a pre-existing network I created (see Docker Networks for more info)
all variables starting with a $ dollar sign are referenced in the .env file (see Setting up the .env file below for more info), meaning that any value beginning with $ in the compose file will be pulled from the .env file. This allows me to use a base .env and docker-compose.yml which can be copied from one folder to another with the basics remaining the same; any requirements specific to that particular container can then be added easily
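To make the port-conflict fix from point 9 concrete, here is a minimal sketch - the service names and images are illustrative, standing in for any two containers that both listen on container port 80:

```yaml
services:
  webapp-a:
    image: nginx          # illustrative image that serves on port 80
    ports:
      - 80:80             # host port 80 -> container port 80
  webapp-b:
    image: nginx
    ports:
      - 90:80             # host port 90 avoids the clash; the container still sees 80
```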


A few notes:

  • Anything written after a # is not passed to the container. It is ignored. This means that you can annotate your own comments directly into the code without fear that it will affect the set up
  • The host-side folders specified in volumes must be created prior to creating the container, or an error will be returned
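For instance, the host-side folders for the radarr/sonarr example could be created up front like this - the ./demo base path is purely illustrative; on a Synology NAS you would point these at /volume1/docker and /volume1/Media:

```shell
# Create every host-side folder referenced in the compose file's volumes
# before the first `up -d`. The ./demo base path is illustrative.
DOCKERDIR=./demo/docker
MEDIADIR=./demo/Media
mkdir -p "$DOCKERDIR/Radarr" "$DOCKERDIR/SonarrV3" "$MEDIADIR"
```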

To run the above compose file, SSH in, navigate to the folder on your machine which holds the correct docker-compose.yml file, and use the following command:

sudo docker-compose up -d

This will create a stack, which is a group of containers (though can also be used for only one container). I prefer to specify the stack name as well, which is done with the following command:

sudo docker-compose -p "stacknamegoeshere" up -d

I use aliases during my SSH sessions, which mean I can create my own shorthand, so when I do this I type:

d-c "stacknamegoeshere" up -d

where I have created an alias for sudo docker-compose -p, shortened to d-c.
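If you'd like the same shorthand, the alias itself is a single line - add it to ~/.bashrc or ~/.profile on the NAS so it survives new SSH sessions:

```shell
# Shorthand for the stack-creation command used above.
alias d-c='sudo docker-compose -p'
```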

we use -d in the command to ensure it runs in detached (daemon) mode. This means that once the command has run its course, you will be able to continue using the same SSH session. If you do not use -d, the containers run in the foreground, attached to your session: you will need to start a new SSH session to keep working, and pressing ctrl + c will stop the running containers

When you run the command for the first time, the image will be 'pulled' from the repository, which is another way of saying it will be downloaded to your machine. The container will then be created using the variables specified in the docker-compose file.

It is possible to run the same 'up -d' command while a container is running. If docker-compose detects changes to the docker-compose.yml file since the last 'up -d', it will apply those changes.


Updating the container version using the docker pull command

I use a container called 'watchtower' (see Watchtower for more info), which automatically tracks updates to the images I use, updates my containers, and notifies me via email whenever this happens (you will need to set up your own SMTP credentials and address for email notifications). However, you can manually search for and pull an updated image by running

docker pull [OPTIONS] NAME[:TAG|@DIGEST]

As an example, to pull the radarr image above (or make sure that I had the latest version of that particular image) that would be

docker pull ghcr.io/linuxserver/radarr:latest

If I did not provide the latest tag, or any other tag, then latest would be applied by default. Tags are helpful when you want to use a specific version of a particular image.
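For example, to pin a specific release instead of latest, the command looks like this - the version tag below is purely illustrative, so check the image's documentation page for real tags:

```shell
docker pull ghcr.io/linuxserver/radarr:4.7.5
```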


Setting up the .env file

The .env file is a very simple plain-text document, used to define the variables found in a docker-compose.yml file.

It is saved in the same directory as the docker-compose.yml file and one of mine looks like this (sensitive information redacted):

a sample .env setup

Information is saved in the [name]=[information] format, and referenced in the docker-compose file by passing [variable]=[$name]. As an example, my PUID=1026 ([name]=[information]). To pass this as a variable in the docker-compose file, it is referenced as PUID=$PUID ([variable]=[$name]).

In this way, we can keep information secure and available for easy copying to new compose files.
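As a concrete sketch, a minimal .env for the compose file above might look like this - the PUID value is the one mentioned above, and the rest are illustrative:

```ini
# .env - saved in the same directory as docker-compose.yml
PUID=1026
PGID=100
TZ=Europe/London
DOCKERDIR=/volume1/docker
MEDIADIR=/volume1/Media
```

In the compose file these are then referenced as PUID=$PUID, $DOCKERDIR/Radarr:/config, and so on.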

To create the file itself, you can use Samba from your machine, File Station inside DSM, or SSH into your machine, navigate to the correct directory, and create it using the touch command.
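Over SSH, that last option is only a couple of commands - the directory below is illustrative, so use the folder that actually holds your docker-compose.yml:

```shell
mkdir -p ./demo/TorrentVPN         # illustrative compose folder
touch ./demo/TorrentVPN/.env       # create the empty .env
chmod 600 ./demo/TorrentVPN/.env   # optional: keep credentials readable only by you
```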


Portainer - Easy Container Management for Docker
A step-by-step docker walkthrough to installing and configuring Portainer, your one-stop container-management resource
Preshared Keys and more - SSH on Linux and Synology
An easy guide which explains how to access and use your SSH sessions to their fullest, including clients and logging in with preshared keys

PTS

PTS fell down the selfhosted rabbit hole after buying his first NAS in October 2020, only intending to use it as a Plex server. Find him on the Synology discord channel https://discord.gg/vgSq5pcT
