
Use of Docker in DeMi

The default way to use DeMi is from OCI containers, colloquially called Docker containers[1]. All sites using DeMi need a working Docker installation on all participating hosts[2].

We use docker compose to group containers into a "stack" and automate container lifecycles. Deployments consist of a set of compose.yaml files organized in a directory called stacks/. This layout has recently emerged as a convention and is supported by tools such as Dockge.
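
A stacks/ directory might look like this (the stack names are hypothetical examples, not actual DeMi stacks):

Text Only
stacks/
├── readout/
│   ├── compose.yaml
│   └── .env
└── monitoring/
    ├── compose.yaml
    └── .env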

Building the OCI images happens mostly in GitLab CI using the CERN IT Linux Group's ci-tools/docker-builder, which is based on Kaniko.

Docker Installation

AlmaLinux 9

Simply follow the official Docker instructions for CentOS to install Docker CE (use the CentOS instructions even on AlmaLinux; the Fedora packages do not seem to work). You need root or sudo privileges.

Be sure to remove any previously installed Docker installations that you do not want to use, for example one installed through snap on Ubuntu.

Bash
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
Only AlmaLinux 9 is currently supported for deployments due to underlying hardware requirements such as FELIX.

WSL2 / Windows

For development you are of course free to use whatever Docker platform you want. On Windows, install Docker Desktop and enable its integration with the WSL2 distribution you are using.

If you use VS Code and have the Dev Containers extension installed, there is a command in the command palette to install Docker.

Docker Post-Installation

For post-installation it is, again, best to follow the official instructions. A few words on common issues follow.

Docker group and user

  • You may need to create the docker group and add the user $USER you are going to use with Docker to it[2]:
Bash
sudo groupadd docker
sudo usermod -aG docker $USER
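
The group membership only takes effect in a new login session; as the official post-installation instructions note, you can activate it immediately and verify that Docker works without sudo:

Bash
newgrp docker            # activate the new group in the current shell
docker run hello-world   # should now run without sudo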

Docker Data Directory

You will need to make sure that the Docker data directory (which will contain all images, containers, and volumes) is on a large enough, local disk - O(100) GB or more.

  • Do not use the default system disk on vanilla el-9 installations, as it is typically too small (~60 GB) and not intended for data.
  • Do not use network-attached storage, as this is too slow even with a fast 10 GbE backbone; it may also be a shared home directory. You want a separate `data-root` for every Docker installation!

The default location is /var/lib/docker, which may not be on a large enough disk out-of-the-box (see above).
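
A quick sanity check of the candidate disks (the mount points are examples; adapt them to your machine):

Bash
df -h /var/lib /home   # compare available space of the default location and alternatives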

Since the docker user is a local system user and does not have a default home directory, a good choice might be to use /home/docker, if /home is a large enough local disk and suitable for local home directories. In order to move the data-root there, put this in a file /etc/docker/daemon.json:

JSON
{
  "data-root" : "/home/docker/data-root",
  "log-driver": "local"
}

We take the opportunity to also set the logging driver to the local driver, which uses a compact format and rotates logs by default to prevent disk exhaustion.

Then enable and start the docker daemon:

Bash
sudo systemctl enable --now docker
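
Afterwards you can verify that the daemon picked up the new settings:

Bash
docker info --format 'data-root: {{.DockerRootDir}}, log-driver: {{.LoggingDriver}}'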

Advanced: Remote Access to Docker Daemon - Docker Contexts

In order to control all servers from any other server, we use Docker contexts to connect to the Docker socket through SSH[3].

To make this method work you need to set up passwordless access to the servers.

This typically involves:

  • Having a working Kerberos installation (e.g. provided by locmap on CERN CentOS 7 machines), and/or
  • Setting up SSH key-pair authentication:
      • Generate a key-pair for the user that will be doing the orchestration.
      • Use ssh-copy-id to copy the SSH keys to the servers.
      • Run ssh-add to add them to the running ssh-agent.
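
Once passwordless SSH works, creating and using a context looks like this (user and hostname are placeholders):

Bash
docker context create myserver --docker "host=ssh://user@myserver.cern.ch"
docker context use myserver      # make it the default for subsequent commands
docker --context myserver ps     # or address a context explicitly per command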

Docker Best Practices

  • Instead of volumes managed by Docker, we prefer local bind mounts for easier access to the contents.
  • For single-container instances or testing, the bind mounts should go into a local directory .docker/ (see the sketch below).
  • If this is on a remote server (depending on the Docker context), the instance path must exist on the remote server as well.
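
As a minimal sketch of this convention (the image and paths are only illustrative):

Bash
mkdir -p .docker/data
docker run --rm -v "$PWD/.docker/data:/data" alpine ls /data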

Running Docker images

Our aim is to provide ready-to-use images for most users, which can be flexibly configured at runtime (using environment variables and bind mounts). To run the images we provide compose projects, which can simply be started with

Bash
docker compose up
The compose files are configured via .env files, which are created by the configuration scripts.
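
As an illustration (the variable names are hypothetical; in practice the configuration scripts generate the file), you can inspect what compose will actually run after variable substitution:

Bash
cat > .env <<'EOF'
IMAGE_TAG=v1.2.3
DATA_DIR=./.docker/data
EOF
docker compose config   # renders the compose file (requires a compose.yaml in the current directory)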

Building Docker images

Local docker build

This is mainly for development or testing purposes. Building (and all other operations) via docker compose is always preferred over plain docker commands.

Bash
docker compose build
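
A few useful variants during development (the service name is hypothetical):

Bash
docker compose build myservice   # build a single service
docker compose build --no-cache  # force a clean rebuild
docker compose up --build        # rebuild and (re)start in one step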

Using GitLab CI/CD

This requires a .gitlab-ci.yml file.

We use the Kaniko build project by the CERN IT Linux Working Group: ci-tools/docker-builder.
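
The actual job definitions come from the ci-tools/docker-builder templates; purely as orientation, a plain Kaniko build job in GitLab CI (not the ci-tools template itself) typically looks like this:

YAML
build-image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"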

Some archival instructions

DEPRECATED

Installation for CentOS 7:

Use the convenience script:

Bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh # [--dry-run] to see what will be done
or install from the Docker repos:
Bash
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker  # to start docker at server startup
sudo systemctl start docker   
sudo docker run hello-world   # test


  1. While we currently use mainly the Docker infrastructure for running containers, support for a different container runtime such as Podman is foreseen for the future. Please let us know if you are interested in contributing. We already use non-Docker OCI tools such as Kaniko and crane.

  2. For the time being we do not support rootless Docker, since this conflicts with our hardware access requirements for low-level system resources. We might switch to a rootless system later (Podman, Apptainer, etc.), or use one for microservices which do not require hardware access.

  3. For simplicity and security reasons we do not currently support other connection methods such as TCP or Portainer Edge.