Monday, 28 July 2025

Using RabbitMQ as a transport layer for Symfony Messaging

G'day:

First up, I'm building on the codebase I worked through in these previous articles:

Reading that lot would be good for context, but in short the first two go over setting up a (very) basic CRUD website that allows the editing of some entities, and on create/update/delete also does the relevant reindexing on Elasticsearch. The third article removes the indexing code from inline, and uses the Symfony Messaging system to dispatch messages ("index this") to message handlers ("OK, I'll index that"). These were all running in-process during the web request.

The overall object of this exercise is to deal with this notion, when it comes to the Elasticsearch indexing:

[That] overhead is not actually needed for the call to action to be provided[…]. The user should not have to wait whilst the app gets its shit together[.]

When serving a web request, the process should be focusing on just what is necessary to respond with the next page. It should not be doing behind-the-scenes housework too.

Using the messaging system is step one of this - it separates the web request code from the Elasticsearch indexing code - but it's all still running as part of that process. We need to offload the indexing work to another process. This is what we're doing today.


Step 0

Step 0 is: confirm RabbitMQ will work for me here. I only know RabbitMQ exists as a thing. I've never used it, and never actually even read any docs on it. I kinda just assumed that "well it's an industry-standard queuing thingey, and Symfony works with a bunch of stuff out of the box, so it'll probably be fine". I read some stuff on their website (RabbitMQ), and RabbitMQ's Wikipedia page too. Seems legit. And I also checked Symfony for mention of integrating with it, and landed on this: Using RabbitMQ as a Message Broker, and that looked promising.


RabbitMQ Docker container

Something I can do without messing around with any app code or Symfony config is getting a RabbitMQ container up and running (and sitting there doing nothing).

There's an official RabbitMQ image on Docker Hub, and the example docker run statements look simple enough:

docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3

As this is gonna be used internally, no need for running securely or with a non-default user etc, but obviously all those options are catered for too.
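
For the record, non-default credentials are just environment variables on the official image - RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS - so something like this would do it:

docker run -d --hostname my-rabbit --name some-rabbit \
    -e RABBITMQ_DEFAULT_USER=myuser \
    -e RABBITMQ_DEFAULT_PASS=changeme \
    rabbitmq:3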

I also noted there's a variant of the image that contains a management app, so I decided to run with that. Converting the generic docker run statement to something for my docker-compose.yml file was easy enough:

# docker/docker-compose.yml
services:

  # [...]

  rabbitmq:
    container_name: rabbitmq

    hostname: rabbitmq

    image: rabbitmq:4.1.2-management

    ports:
      - "5672:5672"
      - "15672:15672"

    stdin_open: true
    tty: true

Port 5672 is its standard application comms port; 15672 is for the manager. The hostname is needed for RabbitMQ's internals; what it's actually set to doesn't matter for what I'm doing.

Building that worked fine, and the management UI was also up (login: guest, password: guest):

The only thing I'm going to be using the management UI for is to watch for messages going into the queue, should I need to troubleshoot anything.
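
The same info is also available from the CLI inside the container, if clicking around a UI isn't the vibe; this lists each queue and how many messages are sitting in it:

docker exec rabbitmq rabbitmqctl list_queues name messages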


Splitting the PHP work: web and worker

The main thing I am trying to achieve here is to lighten the load for the web-app by farming work off to another "worker". This will be running the same codebase as the web app, but instead of running php-fpm to listen to traffic from a web server, it's going to run [whatever] needs to be run to pull messages off the queue and handle them. The web app puts a message on the queue; the worker app pulls them off the queue and does the processing.

So I need a second PHP container.

My initial naive attempt at doing this was to simply duplicate the entry in docker-compose.yml, and remove the port 9000 mapping from php-worker as it won't be listening for web requests. This built and ran "fine", except that the entrypoints of both php-web and php-worker conflicted with each other, as they were both trying to do a composer install on the same volume-mounted vendor directory (the physical directory being on my host PC). This screwed both containers.

After a lot of trial and error (mostly error), I came up with a process as follows:

  1. Copy composer.json and composer.lock to /tmp/composer in the image file system, and run composer install during the build phase. This means that processing is already done by the time either container is brought up. For the Dockerfile to be able to do this, I needed to shift the build context in docker-compose.yml to be the app root, so it can see composer.json and composer.lock.
  2. Having those files in /tmp/composer/vendor is no help to anyone, so we need to copy the vendor directory to the app root directory once the container is up.
  3. As both php-web and php-worker need these same (exact same: they're looking at the same location in the host file system) vendor files, we're going to get just php-web to do the file copy, and get php-worker to wait until php-web is done before it comes up.

Here are the code changes, first for docker-compose.yml:

# docker/docker-compose.yml

services:
  # [...]

  # was:
  #   php:
  #     container_name: php
  #     build:
  #       context: php
  #       dockerfile: Dockerfile

  php-web:
    container_name: php-web
    build:
      context: ..
      dockerfile: docker/php/Dockerfile

    # [...]
    
    entrypoint: ["/usr/local/bin/entrypoint-web.sh"]

  php-worker:
    container_name: php-worker
    build:
      context: ..
      dockerfile: docker/php/Dockerfile

    env_file:
      - mariadb/envVars.public
      - elasticsearch/envVars.public
      - php/envVars.public

    stdin_open: true
    tty: true

    volumes:
      - ..:/var/www

    healthcheck:
      test: ["CMD", "pgrep", "-f", "php bin/console messenger:consume rabbitmq"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

    extra_hosts:
      - host.docker.internal:host-gateway

    secrets:
      - app_secrets

    entrypoint: ["/usr/local/bin/entrypoint-worker.sh"]

    depends_on:
      php-web:
        condition: service_healthy

Notes:

  • Just the naming and build context changes for the original PHP container.
  • Oh and it has its own entrypoint script now.
  • The config for php-worker is much the same as for php-web except:
    • Port 9000 doesn't need to be exposed: it's not going to be serving web requests.
    • It overrides the healthcheck in the Dockerfile with its own check: just that messenger:consume is running.
    • It has a different entrypoint than the web container.

Here's the relevant Dockerfile changes:

# docker/php/Dockerfile

FROM php:8.4.10-fpm-bookworm

RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "zip", "unzip", "git", "vim", "procps"]

# [...]
COPY docker/php/usr/local/etc/php/conf.d/error_reporting.ini /usr/local/etc/php/conf.d/error_reporting.ini
COPY docker/php/usr/local/etc/php/conf.d/app.ini /usr/local/etc/php/conf.d/app.ini

# [...]

RUN pecl install xdebug && docker-php-ext-enable xdebug
COPY docker/php/usr/local/etc/php/conf.d/xdebug.ini /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini

# [...]

WORKDIR /var/www
ENV COMPOSER_ALLOW_SUPERUSER=1

# was (superseded by the per-container entrypoints copied below):
#   COPY --chmod=755 usr/local/bin/entrypoint.sh /usr/local/bin/
#   ENTRYPOINT ["entrypoint.sh"]

WORKDIR /tmp/composer
COPY composer.json composer.lock /tmp/composer/
RUN composer install --no-interaction --prefer-dist --no-scripts

# [...]

COPY --chmod=755 docker/php/usr/local/bin/entrypoint-web.sh /usr/local/bin/
COPY --chmod=755 docker/php/usr/local/bin/entrypoint-worker.sh /usr/local/bin/


EXPOSE 9000

And the entry point scripts:

# docker/php/usr/local/bin/entrypoint-web.sh

#!/bin/bash

rm -f /var/www/vendor/up.dat
cp -a /tmp/composer/vendor/. /var/www/vendor/
touch /var/www/vendor/up.dat

exec php-fpm

# docker/php/usr/local/bin/entrypoint-worker.sh

#!/bin/bash

exec php bin/console messenger:consume rabbitmq
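
Worth noting: messenger:consume has flags for keeping a long-running worker sane - eg --time-limit and --limit make it exit cleanly after a given number of seconds / messages (so a supervisor or Docker's restart policy can bring up a fresh one), and -vv makes it log each message as it handles it. I don't need any of that for this exercise, but for illustration:

php bin/console messenger:consume rabbitmq --time-limit=3600 --limit=100 -vv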

That up.dat is checked in php-web's healthcheck now:

# bin/healthCheck.php

if (file_exists('/var/www/vendor/up.dat')) {
    echo 'pong';
}

This means it won't claim to be up until it's finished copying the vendor files, and php-worker won't come up until php-web is healthy.
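
If I want to check that chain by hand, the pieces are easy to poke at from the host (container names as per docker-compose.yml above):

docker exec php-web php /var/www/bin/healthCheck.php
docker inspect --format '{{.State.Health.Status}}' php-web

The first should echo pong once the vendor copy has finished; the second reports what Docker itself currently thinks.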

I'm pretty sure those're all the PHP changes. I now have two PHP containers running: one handling web, the other handling messages.


Symfony config

The Messenger: Sync & Queued Message Handling docs explained all this pretty clearly.

I needed to install symfony/amqp-messenger, and for that to work I also needed to install the ext-amqp PHP extension. This needed some tweaks in the Dockerfile:

# docker/php/Dockerfile

# [...]

RUN [ \
    "apt-get", "install", "-y",  \
    "libz-dev", \
    "libzip-dev", \
    "libfcgi0ldbl", \
    "librabbitmq-dev" \
]
# [...]

RUN pecl install amqp
RUN docker-php-ext-enable amqp

# [...]

Then I needed to configure a MESSENGER_TRANSPORT_DSN Symfony "environment" variable in .env:

MESSENGER_TRANSPORT_DSN=amqp://guest:guest@host.docker.internal:5672/%2f/messages

(The %2f is just the default vhost, /, URL-encoded. In prod I'd have to be more secure about that password, but it doesn't matter here).

And finally configure the Messaging system to use it:

# config/packages/messenger.yaml

framework:
  messenger:
    transports:
      rabbitmq:
        dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
    routing:
      'App\Message\*': rabbitmq
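
Not strictly necessary, but Symfony's console can confirm the wiring before anything gets dispatched for real: debug:messenger lists the messages each bus knows about and their handlers, and messenger:stats asks the transport how many messages are sitting on it:

php bin/console debug:messenger
php bin/console messenger:stats rabbitmq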

At this point when I rebuilt the containers, everything was happy until I ran my code…


App changes

In my last article I indicated some derision/bemusement about something in the docs:

The docs say something enigmatic:

There are no specific requirements for a message class, except that it can be serialized

Creating a Message & Handler

I mean that's fine, but it's not like their example implements the Serializable interface like one might expect from that guidance? From their example, I can only assume they mean "stick some getters on it", which is not really the same thing. Oh well.

Using Symfony's Messaging system to separate the request for work to be done and doing the actual work

And now I come to discover what they actually meant. And what they meant is that yes, the Message data should be Serializable. As in: via the interface implementation.

I had been passing around a LifecycleEventArgs implementation (one of PostPersistEventArgs, PostUpdateEventArgs or PreRemoveEventArgs) in each message. Here's the listener with the fix already in place, passing just the entity from getObject():

# src/EventListener/SearchIndexer.php

namespace App\EventListener;

use App\Message\SearchIndexAddMessage;
use App\Message\SearchIndexDeleteMessage;
use App\Message\SearchIndexUpdateMessage;
use Doctrine\ORM\Event\PostPersistEventArgs;
use Doctrine\ORM\Event\PostUpdateEventArgs;
use Doctrine\ORM\Event\PreRemoveEventArgs;
use Symfony\Component\Messenger\MessageBusInterface;

class SearchIndexer
{
    public function __construct(
        private readonly MessageBusInterface $bus
    ) {}

    public function postPersist(PostPersistEventArgs $args): void
    {
        $indexMessage = new SearchIndexAddMessage($args->getObject());
        $this->bus->dispatch($indexMessage);
    }

    public function postUpdate(PostUpdateEventArgs $args): void
    {
        $indexMessage = new SearchIndexUpdateMessage($args->getObject());
        $this->bus->dispatch($indexMessage);
    }

    public function preRemove(PreRemoveEventArgs $args): void
    {
        $indexMessage = new SearchIndexDeleteMessage($args->getObject());
        $this->bus->dispatch($indexMessage);
    }
}

Those event args objects don't serialize, so I had to update the code to only pass the entity object that getObject() returns, as above. And then likewise update src/MessageHandler/SearchIndexMessageHandler.php now that it doesn't need to call getObject itself, as it's already receiving that, eg:

#[AsMessageHandler]
public function handleAdd(SearchIndexAddMessage $message): void
{
    $this->searchIndexer->sync($message->getArgs());
}

#[AsMessageHandler]
public function handleUpdate(SearchIndexUpdateMessage $message): void
{
    $this->searchIndexer->sync($message->getArgs());
}

#[AsMessageHandler]
public function handleRemove(SearchIndexDeleteMessage $message): void
{
    $this->searchIndexer->delete($message->getArgs());
}

Once I fixed that one glitch: it worked. When I edited an entity I could see a message going into the RabbitMQ queue (via the manager UI), could see it being removed, and could see the results of the Elasticsearch update. Cool.
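
And if the management UI isn't enough detail, the worker's output is right there in its container logs (it's fairly quiet by default; the -vv flag mentioned earlier makes it log each message as it's handled):

docker logs -f php-worker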

It took ages to get the PHP containers working properly - mostly the composer installation stuff - but the RabbitMQ and Symfony bits were really easy! Nice.

Righto.

--
Adam