Update: I’ve posted a newer version which also uses supervisor to start and keep nginx and php-fpm running!

Coolify is ‘self-hosting with superpowers’ and is aiming to provide an alternative to Heroku and Vercel.

For the last several years, I’ve been using Ansible to deploy my projects, but I’ve been looking to simplify and get functionality like atomic deployments out of the box. Especially for side projects that I can quickly and easily deploy to DigitalOcean and Hetzner, a mechanism to package things up into a container and atomically deploy it is really appealing.

Coolify lets you choose Docker to build and run your app if you prefer - for some cases this will be ideal, but while I use Docker for local development, I’m not looking to run something like Kubernetes in production.

I wasn’t able to find any concrete examples of using Laravel queues with Coolify, so I’ve written up how I’ve achieved this.

I’m using the default choice of Nixpacks to build the project. Nixpacks takes a directory, analyses it, and produces an OCI-compliant image, which Coolify then starts and hot-swaps if your configured health checks pass.

However, the default PHP provider will simply build php-fpm and nginx and point them at your Laravel directory. It doesn’t get your background queues processing. For that, we want supervisord to run and manage the worker processes that handle your jobs.

Customising the PHP provider

I’ve added a nixpacks.toml file to my project root, which gets picked up by Nixpacks and lets you extend the default PHP provider.

I’ve based the nginx.template.conf file included below on the one from the Nixpacks PHP provider, and the php-fpm.conf is likewise taken from the Nixpacks PHP provider.

[phases.setup]
nixPkgs = ["...", "python311Packages.supervisor"]

[phases.build]
cmds = [
    "mkdir -p /etc/supervisor/conf.d/",
    "cp /assets/laravel-worker.conf /etc/supervisor/conf.d/laravel-worker.conf",
    "cp /assets/supervisord.conf /etc/supervisord.conf",
    "chmod +x /assets/start.sh",
    "..."
]

[start]
cmd = '/assets/start.sh'

[staticAssets]
"start.sh" = '''
#!/bin/bash

# Transform the nginx configuration
node /assets/scripts/prestart.mjs /assets/nginx.template.conf /etc/nginx.conf

# Start PHP-FPM
php-fpm -y /assets/php-fpm.conf

# Start Supervisor
supervisord -c /etc/supervisord.conf

# Start Nginx
nginx -c /etc/nginx.conf
'''

"supervisord.conf" = '''
[unix_http_server]
file=/var/run/supervisor.sock

[supervisord]
logfile=/var/log/supervisord.log

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock

[include]
files = /etc/supervisor/conf.d/*.conf
'''

"laravel-worker.conf" = '''
[program:laravel-worker]
command=php /app/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker.log
'''

"php-fpm.conf" = '''
[www]
listen = 127.0.0.1:9000
user = www-data
group = www-data
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 50
pm.min_spare_servers = 4
pm.max_spare_servers = 32
pm.start_servers = 18
clear_env = no
'''

"nginx.template.conf" = '''
user www-data www-data;
worker_processes 5;
daemon off;

worker_rlimit_nofile 8192;

events {
  worker_connections  4096;  # Default: 1024
}

http {
    include    $!{nginx}/conf/mime.types;
    index    index.html index.htm index.php;

    default_type application/octet-stream;
    log_format   main '$remote_addr - $remote_user [$time_local]  $status '
        '"$request" $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log;
    sendfile     on;
    tcp_nopush   on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts

    server {
        listen ${PORT};
        listen [::]:${PORT};
        server_name localhost;

        $if(NIXPACKS_PHP_ROOT_DIR) (
            root ${NIXPACKS_PHP_ROOT_DIR};
        ) else (
            root /app;
        )

        add_header X-Content-Type-Options "nosniff";

        client_max_body_size 35M;
        index index.php;
        charset utf-8;

        $if(IS_LARAVEL) (
            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }
        ) else ()

        $if(NIXPACKS_PHP_FALLBACK_PATH) (
            location / {
                try_files $uri $uri/ ${NIXPACKS_PHP_FALLBACK_PATH}?$query_string;
            }
        ) else ()

        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt  { access_log off; log_not_found off; }

        $if(IS_LARAVEL) (
            error_page 404 /index.php;
        ) else ()

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            include $!{nginx}/conf/fastcgi_params;
            include $!{nginx}/conf/fastcgi.conf;

            fastcgi_param PHP_VALUE "upload_max_filesize=30M \n post_max_size=35M";
        }

        location ~ /\.(?!well-known).* {
            deny all;
        }
    }
}
'''
Going block-by-block

In the [phases.setup] block, we:

  • extend the default provider to also install supervisor, as well as php-fpm and nginx

In the [phases.build] block, we:

  • make the configuration directory for supervisor
  • copy the Laravel worker configuration file into the directory
  • copy the overall supervisor config into the directory
  • ensure the start script is executable
  • leave "..." in place, which gets replaced with the other, default build commands from the Nixpacks PHP provider

In the [start] block, we:

  • run the included start.sh script when the container gets launched by Coolify

The [staticAssets] block contains all of the static assets that we want written into the /assets directory, which can then get copied and/or used.

  • start.sh is our startup script, which transforms the nginx config using the prestart script provided by Nixpacks’ default PHP provider, then starts PHP-FPM and supervisord, and finally runs nginx in the foreground.

  • supervisord.conf is the overall supervisor config file, which tells it to use a Unix socket instead of network (to ensure there are no port conflicts with multiple running containers while Coolify runs the healthchecks).

  • laravel-worker.conf is the worker config file to run the Laravel queues. You may wish to customise this command for non-default queues etc. Note: Your app is located in /app.

  • php-fpm.conf is the PHP-FPM config file.

  • nginx.template.conf gets transformed into the final nginx config file by nixpacks; this overrides their default to provide specific customisations, which you may wish to modify:

    • client_max_body_size 35M; sets the maximum POST body size
    • fastcgi_param PHP_VALUE "upload_max_filesize=30M \n post_max_size=35M"; passes through two php.ini settings to override the maximum allowed upload size
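As a sketch of the worker customisation mentioned above, a variant of laravel-worker.conf that runs two worker processes against a dedicated queue might look like the following (the emails queue name, numprocs value, and log path are all hypothetical and should be adjusted for your app):

```
[program:laravel-worker-emails]
; run two worker processes against a hypothetical "emails" queue
process_name=%(program_name)s_%(process_num)02d
command=php /app/artisan queue:work --queue=emails --sleep=3 --tries=3 --max-time=3600
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker-emails.log
```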

Checking the plan

To check the complete build plan that Nixpacks generates, you can run the following from your project root (assuming you have Nixpacks installed and running locally too):

nixpacks plan -f toml .

Don’t forget that Nixpacks will also install PHP extensions for you by analysing your composer.json file, so if you need PHP’s GD library you can simply add "ext-gd": "*", into your require block and GD will get installed and configured too.
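For example, a composer.json require block pulling in GD might look like this (the PHP version constraint is illustrative):

```
{
    "require": {
        "php": "^8.2",
        "ext-gd": "*"
    }
}
```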

Wrapping up

You could easily extend the above to add multiple workers controlled by supervisor by adding another config file to the [staticAssets] block and copying it into the /etc/supervisor/conf.d/ directory in the build phase. The supervisor config is already set to include all files in that directory, so no other modifications should be necessary.
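For instance, a hypothetical second worker running Laravel’s scheduler via php artisan schedule:work would only need one extra copy command in [phases.build] and a new [staticAssets] entry (the scheduler-worker.conf name and log path are illustrative):

```
[phases.build]
cmds = [
    "mkdir -p /etc/supervisor/conf.d/",
    "cp /assets/laravel-worker.conf /etc/supervisor/conf.d/laravel-worker.conf",
    "cp /assets/scheduler-worker.conf /etc/supervisor/conf.d/scheduler-worker.conf",
    "cp /assets/supervisord.conf /etc/supervisord.conf",
    "chmod +x /assets/start.sh",
    "..."
]

[staticAssets]
"scheduler-worker.conf" = '''
[program:scheduler]
command=php /app/artisan schedule:work
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/scheduler.log
'''
```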

So far I’m very happy with Coolify and looking forward to using it in larger projects, and hopefully this writeup is useful for others!