# Celery Topology Middleware
The celery middleware follows the architecture shown below. It sits between the frontends that start tasks and the backends that execute them. Its purpose is to route tasks to specific backends and to allow easy simultaneous execution of tasks on multiple backends.
The celery middleware can also be replaced by a much simpler proxy, which simply routes tasks to their backend but does not include celery. It therefore cannot launch multiple tasks simultaneously.
This tutorial helps you set up your own instance of the celery middleware, the proxy and the backends.
## Prerequisites
Python >= 3.9 is required. AlmaLinux 9 currently ships with Python 3.9.18. Otherwise, instructions on how to install Python using the Python version manager pyenv can be found here.
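To check that the interpreter on your machine meets this requirement:

```shell
python3 --version
# Exits non-zero if the interpreter is older than 3.9:
python3 -c 'import sys; assert sys.version_info >= (3, 9), "Python >= 3.9 required"'
```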
<!-- ## Install using this repo and Poetry
Poetry needs to be installed. Then do:

```shell
poetry shell
poetry install
``` -->
## Install using venv and pip
### Set up the Python virtual environment
Create and activate a new Python venv, then install the `topology` package:

```shell
python -m venv .venv
source .venv/bin/activate
pip install topology --index-url https://gitlab.cern.ch/api/v4/projects/125755/packages/pypi/simple
```
## Configure the celery topology
Configuration of the celery topology is done via a YAML file. Use a file from the repo (in `compose_script/settings/`) or download an example file:
```shell
wget https://gitlab.cern.ch/atlas-itk-pixel-systemtest/itk-demo-sw/itk-demo-celery/-/raw/master/middleware/topology/compose_script/settings/settings_felix.yaml
```

Configure the celery topology to your needs by editing the YAML file. For initial testing the unchanged examples should be sufficient.
### System
Here the structure of your system is defined. You can freely edit this to represent your topology. The example structure represents the following topology: there are 2 felix hosts, each with 2 cards; each card has 2 devices with 4 DMAs each, giving 2·2·2·4 = 32 backend containers. These containers are named according to their path:
- felix1card0dev0dma0
- ...
- felix2card1dev1dma3
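As a quick sanity check (not part of the tool itself), the full list of container names implied by this structure can be enumerated with bash brace expansion:

```shell
# Expand every felix/card/dev/dma combination into a container name.
names=$(printf '%s\n' felix{1,2}card{0,1}dev{0,1}dma{0,1,2,3})
echo "$names" | head -n 1   # felix1card0dev0dma0
echo "$names" | tail -n 1   # felix2card1dev1dma3
echo "$names" | wc -l       # 32
```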
In addition to the structure, the name of your system and the backend-image need to be set. The code of the backend-image from the example can be found here.
### Tasks
List of tasks that are available in the backend API.
### Workers
Here the workers that distribute the tasks from the message queue to the backends are defined. By defining the queues of the workers, one can control which worker distributes tasks to which backends. The queues for each backend correspond to the container names listed above.
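As an illustration only, such a settings file might look roughly like the sketch below. All key names here are hypothetical, invented for this example; consult the example `settings_felix.yaml` from the repo for the actual schema.

```yaml
# Hypothetical sketch; the real key names are defined by the example settings file.
system:
  name: mysystem
  backend-image: example-registry/backend:latest   # placeholder image name
  structure:           # 2 felix hosts x 2 cards x 2 devices x 4 DMAs
    felix: 2
    card: 2
    dev: 2
    dma: 4
tasks:                 # tasks available in the backend API
  - example_task
workers:
  - name: worker1
    queues:            # queues match the backend container names
      - felix1card0dev0dma0
      - felix1card0dev0dma1
```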
## Starting the celery middleware
Before starting the celery middleware and the proxy, some additional services are required and some are recommended.
### Starting other tools
To use the celery middleware, at least RabbitMQ and Redis are required, for the message queue and the result store respectively. For the containers to be able to communicate with each other, they use a Docker network named deminet. This network has to be created once per host.
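The deminet network can be created once per host with:

```shell
docker network create deminet
```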
Afterwards the containers can be started using the compose files in the `stacks` directory (each in a subshell so the working directory is preserved):

```shell
(cd stacks/rabbitmq && docker compose up -d)
(cd stacks/redis && docker compose up -d)
(cd stacks/flower && docker compose up -d)
```
They can also be downloaded and started directly like this:

RabbitMQ (message queue)

```shell
wget "https://gitlab.cern.ch/atlas-itk-pixel-systemtest/itk-demo-sw/compose-collection/-/raw/master/rabbitmq/compose.yaml?ref_type=heads&inline=false" -O rabbitmq_compose.yaml
docker compose -f rabbitmq_compose.yaml up -d
```
Redis (result store)

```shell
wget "https://gitlab.cern.ch/atlas-itk-pixel-systemtest/itk-demo-sw/compose-collection/-/raw/master/redis/compose.yaml?ref_type=heads&inline=false" -O redis_compose.yaml
docker compose -f redis_compose.yaml up -d
```
Recommended monitoring tools:

Dozzle (UI for container logs)

```shell
wget "https://gitlab.cern.ch/atlas-itk-pixel-systemtest/itk-demo-sw/compose-collection/-/raw/master/dozzle/compose.yaml?ref_type=heads&inline=false" -O dozzle_compose.yaml
docker compose -f dozzle_compose.yaml up -d
```
Flower (UI for queue)

```shell
wget "https://gitlab.cern.ch/atlas-itk-pixel-systemtest/itk-demo-sw/compose-collection/-/raw/master/flower/compose.yaml?ref_type=heads&inline=false" -O flower_compose.yaml
docker compose -f flower_compose.yaml up -d
```
### Creating the compose files for the Celery middleware
Run the `topology` script on your settings file. This generates the compose files needed to start the celery middleware, a proxy that can be used instead of the middleware, and other configuration and utility scripts in the `proxy` and `celery` directories.
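The exact command-line syntax is not reproduced here; as a hypothetical example, assuming the script takes the settings file as its argument (check `topology --help` for the real interface):

```shell
topology settings_felix.yaml
```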
### Starting the celery middleware
The `compose.yaml` file in the directory where `topology` was run is the top-level compose file and can be started with docker compose.
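For example, from the directory where `topology` was run:

```shell
docker compose up -d
```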
## Opening the UI
Out-of-the-box the main services are reachable at the following URLs:
| Service | URL | Purpose |
|---|---|---|
| Celery Middleware UI | http://localhost:8210 | call backends via Celery |
| Proxy UI | http://localhost:8211 | call backends directly |
| Dozzle | http://localhost:8888 | view container logs |
| Flower | http://localhost:5555 | Celery monitor |
| RabbitMQ | http://localhost:15672/ | message queue UI (login: guest/guest) |
The celery middleware UI and the proxy UI show a list of available tasks and all the backend servers that can execute them.
To try one of the example tasks, click on the backend(s) that should execute it and then click on a task.
When using the celery middleware, multiple tasks can be started simultaneously; the proxy server only allows execution of one task at a time.