While working with Celery it’s important to understand scale. Recently, while standing up Celery workers for development, I had the opportunity to run workers on independent boxes, separate from the broker and backend.
The first issue was getting the workers to communicate with the broker and backend. By default, Docker puts all services started from the same docker-compose.yml on the same default network. So, if we want to connect containers defined in separate compose files, it’s best to specify an external network.
Create a network first:

docker network create my-network

The default driver is bridge, so this is a bridged network.
docker network inspect my-network will blurt out all the details for this network.
Here’s our first compose file, with the broker and the backend:
version: '3'
services:
  backend:
    image: 'redis'
  broker:
    image: 'rabbitmq'
networks:
  default:
    external:
      name: my-network
Celery worker container
version: '3'
services:
  worker:
    restart: always
    build: .  # assuming the build context is here
networks:
  default:
    external:
      name: my-network
Here’s the Dockerfile to go with the worker
FROM python:3.8
WORKDIR /packages
# copy current code context to /packages
COPY . .
RUN pip install -r requirements.txt
CMD ["celery", "worker", "-A", "tasks", "-l", "INFO"]
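The Dockerfile installs dependencies from requirements.txt. A minimal one for this setup might look like the following (the redis package is needed for the Redis result backend; pinning versions is up to you):

```
celery
redis
```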
To communicate with the broker and backend, the worker uses the compose service names as host names — Docker’s embedded DNS resolves them on the shared network.
Here’s the Celery config:
from celery import Celery

app = Celery('tasks',
             broker='pyamqp://guest@broker',
             backend='redis://backend')

@app.task
def add(x: int, y: int):
    return x + y
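With the stack up, tasks can be queued from any container on the same network (or from the host, if the broker port is published). A minimal client sketch, assuming tasks.py is importable and the broker is reachable from where this runs:

```python
from tasks import add

# .delay() serializes the call and queues it on the broker;
# one of the workers picks it up and runs it
result = add.delay(2, 3)

# .get() blocks until a worker stores the result in the Redis backend
print(result.get(timeout=10))  # prints 5
```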
Spin up multiple containers for this service
docker-compose up --scale worker=3
This will spin up three containers of the worker service; you’ll see all three in the logs and in the process list.
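Putting it together, startup order matters: the network and the broker/backend stack should exist before the workers try to connect. A sketch, assuming the two compose files live in hypothetical broker-stack/ and worker/ directories:

```shell
# one-time setup
docker network create my-network

# start the broker and backend first so the workers have something to connect to
(cd broker-stack && docker-compose up -d)

# then start the workers, scaled out
(cd worker && docker-compose up -d --scale worker=3)
```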