Using Laravel queues with Coolify and supervisord
Getting your Laravel queue workers processing jobs with supervisord, alongside nginx and php-fpm running!

Coolify is ‘self-hosting with superpowers’ and aims to provide an alternative to Heroku and Vercel.
For the last several years, I’ve been using Ansible to deploy my projects, but I’ve been looking to simplify and get functionality like atomic deployments out of the box. Especially for side projects that I can quickly and easily deploy to DigitalOcean and Hetzner, a mechanism to package things up into a container and deploy it atomically is really appealing.
Coolify lets you choose to use Docker to build and run your app if you prefer. For some cases this will be ideal, but while I use Docker for local development, I’m not looking to use things like Kubernetes in production.
I wasn’t able to find any concrete examples of using Laravel queues with Coolify, so I’ve written up how I’ve achieved this.
I’m using the default choice of Nixpacks to build the project. Nixpacks takes a directory, analyses it, and produces an OCI-compliant image, which Coolify then starts and hot-swaps if your configured health checks pass.
However, the default PHP provider will simply build `php-fpm` and `nginx` and point them at your Laravel directory. It doesn’t help get your background queues processing. For that, we want `supervisord` to run multiple background processes to handle your jobs.
Customising the PHP provider⌗
I’ve added a `nixpacks.toml` file to my project root, which gets picked up by Nixpacks and lets you extend the default PHP provider. I’ve based the `nginx.template.conf` file included below on the one from the Nixpacks PHP provider, and taken `php-fpm.conf` from the Nixpacks PHP provider as well.
Going block-by-block⌗
[phases.setup]⌗
- extend the default provider to also install `supervisor`, as well as `php-fpm` and `nginx` (a sketch of this block follows)
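For reference, a minimal sketch of what that block can look like, assuming `supervisor` is the only extra package you need (the `"..."` entry is the Nixpacks token for keeping the provider’s defaults):

```toml
[phases.setup]
# "..." keeps everything the default PHP provider would install
# (PHP, php-fpm, nginx, composer, ...) and adds supervisor on top
nixPkgs = ["...", "supervisor"]
```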
[phases.build]⌗
- make the configuration directory for `supervisor`
- copy the Laravel worker configuration file into the directory
- copy the overall supervisor config into the directory
- ensure the start script is executable
- `...` gets replaced with the other, default build commands from the Nixpacks PHP provider (a sketch of the block follows this list)
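Roughly, that translates into something like the following; the `/assets` and `/etc/supervisor` paths are my assumptions for illustration, and the final `"..."` entry is what pulls the provider’s default build commands back in:

```toml
[phases.build]
cmds = [
    # create the supervisor configuration directory
    "mkdir -p /etc/supervisor/conf.d/",
    # copy the worker config and the overall supervisor config out of /assets
    "cp /assets/laravel-worker.conf /etc/supervisor/conf.d/laravel-worker.conf",
    "cp /assets/supervisord.conf /etc/supervisor/supervisord.conf",
    # make the start script executable
    "chmod +x /assets/start.sh",
    # replaced by the default build commands from the PHP provider
    "...",
]
```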
[start]⌗
- run the included `start.sh` script when the container gets launched by Coolify (sketched below)
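Assuming the script ships as a static asset (so it ends up in `/assets`), the block is simply:

```toml
[start]
cmd = "/assets/start.sh"
```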
[staticAssets]⌗
This block contains all of the static assets that we want written into the `/assets` directory, which can then get copied and/or used; its shape is sketched below.
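The block itself is just a table mapping file names to file contents; each entry gets written to `/assets/<name>` in the image, with the real bodies held in TOML multi-line strings (placeholders shown here):

```toml
[staticAssets]
"start.sh" = '''
...
'''
"supervisord.conf" = '''
...
'''
"laravel-worker.conf" = '''
...
'''
"php-fpm.conf" = '''
...
'''
"nginx.template.conf" = '''
...
'''
```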
`start.sh` is our startup script, which transforms the nginx config file using the provided scripts from Nixpacks’ default PHP provider.
`supervisord.conf` is the overall supervisor config file, which tells it to use a Unix socket instead of the network (to ensure there are no port conflicts between multiple running containers while Coolify runs the health checks).
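A sketch of that file, assuming the `/etc/supervisor` layout used above; the important pieces are the Unix socket and the `[include]` of the `conf.d` directory:

```ini
[unix_http_server]
file=/var/run/supervisor.sock      ; talk over a Unix socket, not an inet port

[supervisord]
nodaemon=true                      ; stay in the foreground inside the container
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock

[include]
files = /etc/supervisor/conf.d/*.conf
```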
`laravel-worker.conf` is the worker config file that runs the Laravel queues. You may wish to customise this command for non-default queues etc. Note: your app is located in `/app`.
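A sketch along the lines of the supervisor example in the Laravel docs, adjusted for the `/app` path; the number of processes and the `queue:work` flags are illustrative and worth tuning for your app:

```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /app/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
numprocs=2
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stopwaitsecs=3600
```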
`php-fpm.conf` is the PHP-FPM config file.
`nginx.template.conf` gets transformed into the final nginx config file by Nixpacks; this overrides their default to provide specific customisations, which you may wish to modify (see the excerpt after this list):

- `client_max_body_size 35M;` sets the maximum POST body size
- `fastcgi_param PHP_VALUE "upload_max_filesize=30M \n post_max_size=35M";` passes through two `php.ini` settings to override the maximum allowed upload size
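An excerpt (not the full template) showing roughly where those two directives sit; everything else follows the provider’s template:

```nginx
server {
    # ...
    client_max_body_size 35M;   # maximum accepted request (POST) body size

    location ~ \.php$ {
        # ...
        # override two php.ini settings for requests served by this site
        fastcgi_param PHP_VALUE "upload_max_filesize=30M \n post_max_size=35M";
    }
}
```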
Checking the plan⌗
To check the complete build plan that Nixpacks generates, you can run the following from your project root (assuming you have Nixpacks installed and running locally too):
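For example, the `plan` subcommand prints the build plan Nixpacks would generate for the current directory:

```sh
nixpacks plan .
```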
Don’t forget that Nixpacks will also install PHP extensions for you by analysing your `composer.json` file, so if you need PHP’s GD library you can simply add `"ext-gd": "*"` into your require block and GD will get installed and configured too.
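For example, a `require` block along these lines (the PHP version constraint is just illustrative) is enough for Nixpacks to pick up and configure GD:

```json
{
    "require": {
        "php": "^8.2",
        "ext-gd": "*"
    }
}
```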
Wrapping up⌗
You could easily extend the above to add multiple workers controlled by `supervisor`: just add another config file to the `[staticAssets]` block and copy it into the `/etc/supervisor/conf.d/` directory in the build phase. The supervisor config is already set to include all files in that directory, so no other modifications should be necessary.
So far I’m very happy with Coolify and looking forward to using it in larger projects, and hopefully this writeup is useful for others!