
Thursday, 22 January 2026

Setting up dev/prod environments with Lovable and Supabase

G'day:

This is one of those "we really should have done this from the start" situations that catches up with you eventually.

We've been building an e-commerce admin application using Lovable (an AI coding platform that generates React/TypeScript apps with Supabase backends). For context, Lovable is one of those tools where you describe what you want in natural language (vibe-coding [muttermutter]), and it generates working code with database migrations and everything. Works surprisingly well, actually.

The problem: we'd been developing everything directly in what would eventually become the production environment. Single Supabase instance, no separation between dev work and live system. Every code change, every database migration, every "let's try this and see what happens" experiment - all happening in the same environment that would eventually serve real users.

This is fine when you're prototyping. It's less fine when the Product Owner has been merging features over the Christmas break and you've suddenly got 9 pull requests to audit before you can safely call anything "production ready".

Time to sort out proper dev/test/prod environments. How hard could it be?

The single environment problem

Before we get into the solution, let's be clear about what was wrong with the setup.

We had one Supabase project. Everything happened there:

  • Development work from Lovable
  • Database migrations as they were generated
  • Test data mixed with what would eventually be real data
  • Edge functions being deployed and redeployed
  • Configuration secrets that would need to be different in production

The workflow was: make changes in Lovable, push to GitHub, review the PR, merge. Rinse and repeat. No smoke testing in a separate environment, no way to verify migrations wouldn't break things, no safety net.

This meant we couldn't safely experiment. Every "what if we tried this approach?" question carried the risk of breaking the only database we had. And with everyone sharing one environment, there was no isolation between what one person was working on and what another person might be testing.

The obvious solution: separate Supabase instances for dev and prod, with proper deployment workflows between them. Standard stuff, really. Except Lovable's documentation barely mentions this scenario, and Supabase has some non-obvious behaviours around how environment separation actually works.

Lovable Cloud: the auto-provisioning disaster

We'd actually tried to set up separate environments once before, and it went spectacularly wrong.

The plan was simple: create a new Lovable project, connect it to our existing Supabase instance, start building features. Lovable has an option to use an external Supabase project rather than having Lovable manage everything, so we configured that upfront.

Except before we could do any actual work, Lovable forced us to enable "Lovable Cloud". This wasn't presented as optional - it was a "you must do this to proceed" situation. Fair enough, we thought, probably just some hosting infrastructure it needs.

Wrong.

Enabling Lovable Cloud auto-provisioned a completely different Supabase instance and ignored our pre-configured external connection entirely. Login attempts started failing with HTTP 400 errors because the frontend was trying to authenticate against the wrong database. The browser console showed requests going to cfjdkppbukvajhaqmoon.supabase.co when we'd explicitly configured it to use twvzqadjueqejcsrtaed.supabase.co.

It turns out Lovable has two completely different modes:

  • Lovable Cloud - auto-provisions its own Supabase, manages everything, cannot be overridden
  • External Supabase - you bring your own Supabase project and manage it yourself

Once Cloud mode is enabled, it completely overrides any external connections even if you'd explicitly configured them first. This isn't documented clearly in the Lovable UI - you just get a toggle that seems like it's enabling some hosting feature, not fundamentally changing how the entire project works.

The fix: delete the project entirely and start again with explicit "DO NOT enable Cloud" instructions from the beginning. Not ideal, but it worked.

Understanding the pieces

Once we'd learned our lesson about Lovable Cloud, we needed to properly understand how environment separation actually works with this stack.

The key insight came from an article by someone who'd already solved this problem: Lovable Branching. Their approach was straightforward:

  • DEV: Lovable project → dev GitHub branch → DEV Supabase instance → Lovable hosting
  • PROD: Same repo → main GitHub branch → PROD Supabase instance → Netlify hosting

The critical bit: completely separate Supabase instances. Not Supabase's branching feature (which exists but is more for preview environments), actual separate projects. One for development, one for production, zero overlap.

This makes sense when you think about it. Database migrations aren't like code - you can't just merge them and hope for the best. You need to test them in isolation before running them against production data. Separate instances means you can experiment freely in dev without any risk of accidentally breaking prod.

Environment variables in three different contexts

Where things get interesting is environment variables. Turns out there are three completely different systems at play:

Frontend variables (Vite): These need the VITE_ prefix to be accessible in browser-side code. You access them via import.meta.env.VITE_SUPABASE_URL and similar. The .env file contains your DEV values as defaults, but Netlify's environment variables override these at build time for production. This inheritance is standard Vite behaviour - actual environment variables take precedence over .env file values.
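
For illustration, the committed .env ends up looking something like this. The URL is the DEV one mentioned earlier; the key variable name and value here are placeholders rather than our actual config:

VITE_SUPABASE_URL=https://twvzqadjueqejcsrtaed.supabase.co
VITE_SUPABASE_PUBLISHABLE_KEY=<dev publishable key>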

Edge function variables (Deno): These are managed through Supabase's "Edge Function Secrets" system. You set them via supabase secrets set KEY=value or through the Supabase dashboard, and they're accessed in code via Deno.env.get('KEY'). Here's the odd bit: Supabase treats all edge function environment variables as "secrets" regardless of whether they're actually sensitive. Non-secret configuration like API hostnames still goes through the secrets mechanism. It's just how Supabase works. This triggered my "someone is wrong on the internet" inclinations, and I found (well: OK, Claudia found it whilst diagnosing the issue) a GitHub issue about it: Upload non-secret environment variables to Edge Function Secrets Management. Upvoted. I notice now that someone at Supabase has noted said issue, shortly after I nudged it.

CLI configuration (Supabase tooling): When you run npx supabase link --project-ref <PROJECT_ID>, it writes the project reference to supabase/.temp/project-ref. This is local state that determines which Supabase instance the CLI commands operate against. The .temp directory is gitignored, so each developer (and each environment) can link to different projects without conflicts.

The important realisation: these three systems don't talk to each other. Your frontend env vars in .env are separate from your edge function secrets in Supabase, which are separate from your local CLI link state. They all happen to reference the same Supabase projects, but through completely independent configuration mechanisms.

The VITE_SUPABASE_PROJECT_ID battle

This is where things got properly frustrating.

The Supabase client needs two things to initialise: the project URL and the publishable key. That's it. The project ID is already embedded in the URL - https://twvzqadjueqejcsrtaed.supabase.co contains the ID right there in the subdomain. Having a separate VITE_SUPABASE_PROJECT_ID variable is completely redundant.

So we asked Lovable to remove it from the .env file.

Every. Single. Commit. It put it back.

We tried being explicit: "We don't use VITE_SUPABASE_PROJECT_ID, please remove it". Lovable responded "Yes, done" and left it there. We manually deleted it and pushed the change ourselves. The next commit from Lovable put it back. We explained why it was redundant. Lovable agreed with the reasoning, confirmed it would remove the variable, and then didn't.

The AI clearly didn't understand why it kept adding this variable back. It wasn't being defiant - it genuinely seemed to think it was helping. But no amount of prompting, explaining, or manual removal could break the pattern.

Claudia (my AI pair programmer, who was observing this farce) found it hilarious. I found it less hilarious. In the end, I did something rare: I surrendered. The variable is still in the codebase. It doesn't do anything, the Supabase client doesn't use it, but it's there. Lovable won.

This became a useful lesson about AI code generation tools: they're brilliant at generating the initial 80% of a solution, but that last 20% - the refinement, the cleanup, the removal of unnecessary cruft - sometimes requires human intervention that the AI just can't process. Even when it claims to understand.

The review workflow

Speaking of that 80/20 split, we developed a proper review process for Lovable-generated code. This wasn't just paranoia - AI-generated code needs human oversight, especially when it's going to production.

The workflow went like this:

  1. Lovable generates code based on a prompt from the Product Owner
  2. I create a pull request from the feature branch to dev
  3. GitHub Copilot does an automated review, catching obvious issues
  4. I review the code manually, looking for security concerns, deployment gotchas, architectural problems
  5. Claudia reviews it as well, often catching things I missed
  6. We compile a comprehensive list of issues and create a fix prompt for Lovable
  7. Lovable makes another pass, addressing the feedback
  8. Repeat until the code is actually mergeable

This multi-layer review caught things that no single reviewer would spot. Copilot is good at identifying code smells and standard issues. I'm good at spotting deployment risks and security problems. Claudia is good at catching logical inconsistencies and suggesting better patterns.

And, full disclosure: I can read TypeScript and React code, and having spent a few solid weeks doing "self-teaching" on both - I should blog this actually - I understand what's going on for the most part, but I am not a TS/React dev. I need Claudia and Copilot to review this stuff.

One recurring annoyance: GitHub Copilot's automated review insists on suggesting American spellings. "Initialize" instead of "initialise", "color" instead of "colour". Every. Single. Review. I'm a Kiwi, and a civilised person, and this is an app for a UK audience: the codebase uses British English not that tariff-addled colonial shite, but Copilot is having none of it.

The key insight here is that AI code generation isn't "press button, receive working code". It's more like working with a very knowledgeable but inexperienced junior developer who needs guidance on architecture, security, and project-specific patterns. The review process is where the actual quality control happens.

The actual working solution

After all the false starts and battles with Lovable's helpful tendencies, here's what actually works.

The architecture

DEV environment:

  • Supabase project: the original instance we'd been using all along
  • GitHub branches: DEV for integration, and each ticket's work is done in a short-lived feature branch of DEV, eg JRA-1234_remove_project_id_AGAIN
  • Hosting: Lovable's built-in preview hosting (good enough for dev work)
  • Database: DEV Supabase instance with all our test data and experimental migrations

PROD environment:

  • Supabase project: fresh instance created specifically for production
  • GitHub branch: main only
  • Hosting: Netlify (automatic deployment on push to main)
  • Database: PROD Supabase instance with clean migration history

The decision to keep the original instance as DEV rather than promoting it to PROD was deliberate. The existing instance had all our development history, test data, and the occasional experimental schema change. Starting PROD fresh from a clean set of migrations gave us a proper foundation without any cruft.

The deployment process

Frontend deployment happens automatically via Netlify. When code merges to main, Netlify detects the change, runs npm ci followed by npm run build, and serves the static files from the dist folder. Environment variables configured in Netlify's UI override the .env file defaults, giving us PROD Supabase credentials without changing any code.
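
Whether you drive this through Netlify's UI or a committed netlify.toml, the build configuration boils down to something like the sketch below (build settings only - the PROD Supabase credentials stay in Netlify's UI so they never land in the repo):

[build]
  command = "npm run build"
  publish = "dist"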

Backend deployment is deliberately manual. No automation, no automatic deploys on merge, no clever CI/CD pipelines. When we're ready to deploy database changes to production:

npx supabase login
npx supabase link --project-ref <PROD_PROJECT_ID>
npx supabase db push
npx supabase functions deploy

That's it. Four commands, run by a human, who has presumably read the migrations and understood what they're about to do. The db push command reads all migration files from supabase/migrations/ and applies any that haven't been run yet, tracked via the supabase_migrations.schema_migrations table.
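
Before pushing to PROD it's also worth double-checking which migrations the CLI thinks have already been applied. If I've remembered the subcommand correctly, it's:

npx supabase migration list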

This manual approach is a deliberate choice. Database migrations can break things in ways that frontend code changes usually don't. Having a human in the loop - someone who's actually reviewed the SQL and thought about what could go wrong - provides a safety net that automated deployments don't.

And, to be transparent, we are a very small team, and I am a Team Lead / Developer by trade, and all this "systems config shite" is a) beyond me; b) of very little interest to me. I'm doing it because "someone has to do it" (cue: sympathetic violins). I know we should have some sort of CI/CD going on, and eventually we will, but we don't need it for MVP, so I'm managing it manually for now. And - as per above - it's dead easy!

Oh, one thing I didn't mention: this is precisely how I finished spinning up the new Supabase instance for prod. Obviously the new Supabase DB was empty... I just did the db push and functions deploy to get it up to date with dev.

Keeping branches in sync

Between tasks, we merge main back to dev to keep them in sync. This prevents the two branches from drifting too far apart and makes the eventual dev → main merges simpler. Standard Git workflow stuff, but worth stating explicitly because Lovable's documentation focuses almost entirely on the "single branch, continuous deployment" model.
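
For the record, the sync itself is just bog-standard Git:

git checkout dev
git merge main
git push origin dev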

Edge functions and secrets

Edge functions turned out to be simpler than expected, once we understood how Supabase handles them.

The functions themselves live in supabase/functions/ in the same repository as everything else. They're not a separate codebase or deployment - they're just TypeScript files that get deployed via npx supabase functions deploy. When you push changes to GitHub, Supabase doesn't automatically deploy them (unlike the frontend with Netlify). You need to explicitly run the deploy command.

Environment variables for edge functions work through Supabase's "Edge Function Secrets" system. Some are auto-managed by Supabase itself:

  • SUPABASE_URL
  • SUPABASE_ANON_KEY
  • SUPABASE_SERVICE_ROLE_KEY
  • SUPABASE_DB_URL

These automatically have the correct values for whichever Supabase instance is running the function. DEV Supabase runs the function, it gets DEV credentials. PROD Supabase runs the function, it gets PROD credentials. No configuration needed.

Any other environment variables need to be set manually per environment. For our project, this included:

  • EMAIL_API_KEY - for sending emails
  • BANK_API_ACCESS_TOKEN - for our bank's API integration
  • BANK_API_WEBHOOK_SECRET - webhook signature verification
  • BANK_API_HOST - the API hostname (different for sandbox vs production)

That last one is worth noting: we needed BANK_API_HOST to be api-test.ourbank.com in DEV and api.ourbank.com in PROD. This isn't a secret - it's just configuration. But Supabase treats all edge function environment variables as "secrets" regardless of whether they're actually sensitive.

You set these via the Supabase dashboard (Authentication → Secrets) or via the CLI:

npx supabase secrets set BANK_API_HOST=api.ourbank.com

One gotcha: you can't view the raw values of secrets in the dashboard after they're set, only their hash. This is annoying for non-sensitive configuration values where you might want to verify what's actually configured. But it's a one-time setup per environment, so not a huge problem in practice. And there is that GitHub issue…
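
The CLI can at least list which secret names are set (it shows digests rather than values, same as the dashboard), which is enough to confirm you haven't missed one per environment:

npx supabase secrets list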

This is then used in our edge function along these lines:

function getOurBankApiUrl(): string {
  const apiHost = Deno.env.get('BANK_API_HOST');
  if (!apiHost) {
    throw new Error(
      'Missing bank configuration. ' +
      'Set BANK_API_HOST environment variable.'
    );
  }
  return `https://${apiHost}/some_slug_here`;
}

JWT verification settings

Edge functions by default require valid Supabase JWTs in the Authorization header. For webhooks (or other computer-to-computer calls) or public endpoints, you need to disable this. This goes in supabase/config.toml:

[functions.ourbank-webhook]
verify_jwt = false

[functions.validate-bank-details]
verify_jwt = false

These settings are the only thing that should be in your config.toml. We initially had 60+ lines of local development server configuration (API ports, database settings, auth config) that Lovable had generated. All unnecessary - that configuration is for running Supabase locally, which we're not doing. The verify_jwt settings do need to be there, though, because npx supabase functions deploy reads them from config.toml when it deploys.

Gotchas and non-obvious behaviours

Here's everything that wasn't obvious from the documentation, discovered through trial and error.

Supabase CLI tool location changed

Older documentation references a .supabase/ directory for CLI state. The CLI now uses supabase/.temp/ instead. When you run npx supabase link, it writes the project reference to supabase/.temp/project-ref, along with version tracking files and connection details.

This directory must be in .gitignore because it's environment-specific. Each developer links to their own preferred project (DEV or PROD), and these link states are stored locally. The directory structure looks like:

supabase/
├── .temp/
│   ├── project-ref          # Linked project ID
│   ├── storage-version      # Version tracking
│   ├── rest-version
│   ├── gotrue-version
│   ├── postgres-version
│   ├── pooler-url          # Connection pooler URL
│   └── cli-latest          # CLI version check
├── functions/              # Edge functions
├── migrations/             # SQL migration files
└── config.toml            # JWT settings only
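
For completeness, the corresponding .gitignore entry is just:

supabase/.temp/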

The "PRODUCTION" badge means nothing

Every Supabase project shows a "PRODUCTION" badge in the dashboard header. This isn't an indicator of whether your project is actually being used for production - it's Supabase's terminology for distinguishing standalone projects from preview branches created via their branching feature. Your DEV instance will show "PRODUCTION" just like your actual production instance. Ignore it.

npx is not npm install -g

This is just me not being a Node dev.

Running npm install -g supabase returns a warning: "Installing Supabase CLI as a global module is not supported." Instead, use npx supabase <command> for everything. This downloads the CLI on-demand, caches it in npm's global cache, and executes it. It's not "installed" in the traditional sense, but it works identically from a user perspective.

First-time use requires npx supabase login which opens a browser for OAuth authentication. This stores an access token locally. Without this, CLI commands fail with "Access token not provided".

Netlify build doesn't touch the database

Common confusion: when Netlify runs npm run build, it only compiles frontend code to static files. It does not run database migrations. Those are a completely separate manual step via npx supabase db push.

This separation is deliberate - frontend and backend deploy independently. You can deploy backend changes without touching the frontend, and vice versa. The deployment order matters though: always deploy backend schema changes first, then frontend code that depends on those changes.

React Router and direct navigation

Our first production bug was a proper head-scratcher. Direct navigation to https://ourapp.netlify.app/products returned a 404, but clicking through from the home page worked fine.

Turns out this is a fundamental disconnect (for me!) between how React Router works and what servers expect. React apps are Single Page Applications - there's literally one HTML file (index.html). React Router handles navigation by swapping components in JavaScript, updating the browser's address bar without making server requests.

When you click a link inside the app (like from / to /products), React Router intercepts it and just changes what's rendered. No server request happens. But when you directly navigate to /products in a fresh browser tab, Netlify's server receives a request for /products, looks for a file called products.html, can't find it, and returns 404.

I'll be honest - this felt like broken behaviour to me. Surely "being able to navigate to any page directly" is web application table stakes? How is this not just working? But the issue is that React and React Router are client-side libraries. They have no control over what the server does. The server needs explicit configuration to serve index.html for all routes. I felt a bit thick when Claudia explained this to me using small words.

The fix is simple: create public/_redirects with one line:

/* /index.html 200

This tells Netlify: for any URL path, serve index.html instead of looking for specific files. The 200 status code means it's a rewrite, not a redirect - the browser URL stays as /products but Netlify serves index.html behind the scenes. React boots up, React Router sees the URL, and renders the correct page.

Why didn't we hit this during development? Because Vite (the dev server) already has this behaviour built in. It knows you're building an SPA and handles it automatically. The problem only appears when you deploy to a production server that doesn't know about your client-side routing.

This should probably be included by default in any SPA scaffolding tool, but it's not. Add it to your "first deploy checklist" and move on.

Environment variable override precedence

Vite's environment variable resolution: actual environment variables (set in Netlify's UI) override .env file values at build time. The .env file serves as the fallback for local development. This means you can commit DEV credentials in .env, and Netlify will use PROD credentials from its configuration without any code changes.

Migration file format matters

Supabase requires migration files to use timestamp format: YYYYMMDDHHMMSS_description.sql. Lovable doesn't use this format by default. You need to explicitly instruct it in your Lovable Knowledge file, and even then it sometimes needs reinforcement in prompts. We added this to our Knowledge file:

Database migrations use timestamp format: YYYYMMDDHHMMSS_description.sql
Create a migration file for EVERY database change (tables, columns, indexes, constraints, RLS policies, functions, triggers)
Never make database changes without generating a corresponding migration file
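
So a migration file ends up named along these lines (a made-up example, obviously):

20260122093000_add_order_status_to_orders.sql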

Even with this, Lovable occasionally forgets, or will use a GUID in place of the description. Code review catches it.

Supabase key format evolution

Supabase has two key formats in the wild:

  • Legacy anon public key (JWT format): rlWuoTpv...
  • New publishable key format: sb_publishable_...

Both work, but the dashboard now recommends using publishable keys. We're using the legacy JWT format for now because both our DEV and PROD instances started with it, and mixing formats between environments seemed like asking for trouble. Migration to the new format is a separate ticket for when we're not in the middle of setting up production.

What we ended up with

After all the false starts, surrenders to Lovable's stubbornness, and discoveries about how these tools actually work, we've got a functioning dev/prod separation that's simple enough to be maintainable.

The key decisions that made it work:

  • Separate Supabase instances rather than using branching features - complete isolation, no data leakage risk
  • Manual database deployments rather than automation - deliberate, reviewed, controlled
  • Short-lived feature branches off a long-lived dev branch - standard Git workflow that the team already understands
  • Netlify for frontend hosting with environment variable overrides - zero-config deployment that just works
  • Multi-layer code review process - Copilot for automated checks, me for architecture and security, Claudia for catching the bits I miss

The workflow is straightforward: develop in Lovable against DEV, review the pull request, merge to dev, smoke test, merge dev to main when ready, manually deploy backend changes, let Netlify handle the frontend. It's not fancy, there's no sophisticated CI/CD pipeline, but it's appropriate for a small team building an MVP.

The biggest lesson: AI code generation tools like Lovable are brilliant at the initial 80% of implementation, but that last 20% - the refinement, security review, deployment considerations - still needs human oversight. A technically proficient human. The review workflow isn't overhead; it's where the actual quality control happens.

Don't get sucked into the hype: "vibe coding" is simply not a thing when it comes to production applications. It's only good for building functional demos.

And sometimes, you just have to accept that VITE_SUPABASE_PROJECT_ID is going to live in your codebase forever, doing absolutely nothing, because Lovable has decided it belongs there and no amount of reasoning will change its mind.

Righto.

--
Adam


P.S. As well as doing all the Netlify config for PROD, I also set up a separate Netlify site for TEST. This one triggers builds off merges to dev and uses the DEV Supabase credentials. It's exposed to the world on the admin-test subdomain (live is just admin). This gives the Product Owner a stable environment to test new features before they go live, running in the same hosting setup as production but against the dev database. Means we can catch UI issues or integration problems in a production-like environment without risking actual production.

Sunday, 10 August 2025

Building a database-driven scheduled task system in Symfony (or How I learned to stop worrying and love worker restarts)

G'day:

In previous roles, I've always had scheduled tasks sorted one way or another. In CFML it's dead easy with <cfschedule>, or the sysops team just chuck things in cron and Bob's yer uncle. But I wanted to take this as a chance to learn "the Symfony way" of handling scheduled tasks - you know, proper database-driven configuration instead of hardcoded #[AsPeriodicTask] attributes scattered about the codebase like confetti.

The problem is that Symfony's approach to scheduling is a bit… well, let's just say it's designed for a different use case than what I needed. When you want to let users configure tasks through a web interface and have them update dynamically, you quickly discover that Symfony's scheduler has some rather inflexible opinions about how things should work.

Fair warning: my AI mate Claudia (OKOK, so her name is actually claude.ai, but that's a bit impersonal) is writing at least the first draft of this article, because frankly after wrestling with Symfony's scheduling system for a week, I can't be arsed to write it all up myself. She's been with me through the whole journey though, so she knows where all the bodies are buried.

What we ended up building is a complete database-driven scheduled task system that reads configurations from the database, handles timezone conversions, respects working days and bank holidays, and - the real kicker - actually updates the running schedule when you change tasks through the web interface. Spoiler alert: that last bit required some creative problem-solving because Symfony really doesn't want you to do that.

Libraries and dependencies

Before diving in, here's what we ended up needing from the Symfony ecosystem:

Core components:

Supporting cast:

Nothing too exotic there, but the devil's in the details of how they all play together…

What's a task, anyway?

Before we get into the weeds, let's talk about what we're actually trying to schedule. A "task" in our system isn't just "run this code at this time" - it's a proper configurable entity with all the complexities that real-world scheduling demands.

The main challenges we needed to solve:

  • Multiple schedule formats - Some tasks want cron expressions (0 9 * * 1-5), others want human-readable intervals (every 30 minutes)
  • Timezone awareness - Server runs in UTC, but users think in Europe/London with all that BST/GMT switching bollocks
  • Working days filtering - Skip weekends and UK bank holidays when appropriate
  • Task variants - Same underlying task type, but with different human names and configuration metadata (like "Send customer emails" vs "Send admin alerts")
  • Active/inactive states - Because sometimes you need to disable a task without deleting it
  • Execution tracking - Users want to see when tasks last ran and what happened

Here's what our DynamicTaskMessage entity ended up looking like:

#[ORM\Entity(repositoryClass: DynamicTaskMessageRepository::class)]
class DynamicTaskMessage implements JsonSerializable
{
    public const int DEFAULT_PRIORITY = 50;

    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\Column(length: 255, nullable: false)]
    private ?string $type = null;

    #[ORM\Column(length: 255, nullable: false)]
    private ?string $name = null;

    #[ORM\Column(length: 500, nullable: false)]
    private ?string $schedule = null;

    #[ORM\Column(nullable: false, enumType: TaskTimezone::class)]
    private ?TaskTimezone $timezone = null;

    #[ORM\Column(nullable: false, options: ['default' => self::DEFAULT_PRIORITY])]
    private ?int $priority = self::DEFAULT_PRIORITY;

    #[ORM\Column(nullable: false, options: ['default' => true])]
    private ?bool $active = true;

    #[ORM\Column(nullable: false, options: ['default' => false])]
    private ?bool $workingDaysOnly = false;

    #[ORM\Column(nullable: true)]
    private ?DateTimeImmutable $scheduledAt = null;

    #[ORM\Column(nullable: true)]
    private ?DateTimeImmutable $executedAt = null;

    #[ORM\Column(nullable: true)]
    private ?int $executionTime = null;

    #[ORM\Column(type: Types::TEXT, nullable: true)]
    private ?string $lastResult = null;

    #[ORM\Column(nullable: true)]
    private ?array $metadata = null;

    // ... getters and setters

    public function jsonSerialize(): mixed
    {
       return [
        'id' => $this->id,
        'type' => $this->type,
        'name' => $this->name,
        'schedule' => $this->schedule,
        'timezone' => $this->timezone?->value,
        'priority' => $this->priority,
        'active' => $this->active,
        'workingDaysOnly' => $this->workingDaysOnly,
        'scheduledAt' => $this->scheduledAt?->format(DateTimeImmutable::ATOM),
        'executedAt' => $this->executedAt?->format(DateTimeImmutable::ATOM),
        'executionTime' => $this->executionTime,
        'lastResult' => $this->lastResult,
        'metadata' => $this->metadata
       ];
    }
}

Few things worth noting:

  • JsonSerializable interface so it plays nicely with Monolog when we're debugging
  • DEFAULT_PRIORITY constant because Claudia talked me out of having nullable booleans with defaults (and she was absolutely right - explicit is better than "maybe null means something")
  • metadata as JSON for task-specific configuration - like email templates, API endpoints, whatever each task type needs
  • executedAt and lastResult for tracking execution history
  • executionTime for tracking how long tasks take to run
  • workingDaysOnly instead of the more verbose respectWorkingDays I originally made up

We populated this with a proper set of sample data (2.createAndPopulateDynamicTaskMessageTable.sql) covering all the different scheduling scenarios we needed to handle.

Quick detour: the web interface

Before we get into the gnarly technical bits, we needed a proper web interface for managing these tasks. Because let's face it, editing database records directly is for masochists.

We built a straightforward CRUD interface using Symfony forms - nothing fancy, just the essentials: create tasks, edit them, toggle them active/inactive, and see when they last ran and what happened. Claudia deserves a proper chapeau here because the CSS actually looks excellent, which is more than I can usually manage.



The key thing users need to see is the execution history - when did this task last run, did it succeed, and when's it due to run next. That's the difference between a useful scheduling system and just another cron replacement that leaves you guessing what's going on.

The forms handle all the complexity of the DynamicTaskMessage entity - timezone selection, working days checkbox, JSON metadata editing - but the real magic happens behind the scenes when you hit "Save". That's where things get interesting, and where we discovered that Symfony's scheduler has some… opinions about how it should work.

But we'll get to that shortly. For now, just know that we have a working interface where users can configure tasks without touching code or database records directly. Revolutionary stuff, really.

The fundamental problem: Symfony's scheduling is objectively wrong

Here's where things get interesting, and by "interesting" I mean "frustrating as all hell".

Symfony's scheduler component assumes you want to hardcode your task schedules using attributes like #[AsPeriodicTask] directly in your code. So you end up with something like this:

#[AsPeriodicTask(frequency: '1 hour', jitter: 60)]
class SendNewsletterTask
{
    public function __invoke(): void 
    {
        // send newsletters
    }
}

Which is fine if you're building a simple app where the schedules never change and you don't mind redeploying code every time someone wants to run the newsletter at a different time. But what if you want users to configure task schedules through a web interface? What if you want the same task type to run with different configurations? What if you want to temporarily disable a task without touching code?

Tough shit, according to Symfony. The scheduling is munged in with the task implementation, which is objectively wrong from a separation of concerns perspective. The what should be separate from the when.

We need a solution where:

  • Task schedules live in the database, not in code annotations
  • Users can create, modify, and disable tasks through a web interface
  • The same task class can be scheduled multiple times with different configurations
  • Changes take effect immediately without redeploying anything

This is where DynamicScheduleProvider comes in - our custom schedule provider that reads from the database instead of scanning code for attributes. But first, we need to sort out the messaging side of things…

Task Message and MessageHandlers

With Symfony's hardcoded approach out the window, we needed our own messaging system to bridge the gap between "the database says run this task now" and "actually running the bloody thing".

We started down the path of having different Message classes and MessageHandler classes for each task type - SendEmailMessage, SystemHealthCheckMessage, etc. But that quickly became obvious overkill. The messaging part that Symfony handles is identical for all tasks; it's only which "sub" handler gets executed at the other end that differs, and we can derive that from the task's type.

So we ended up with one simple TaskMessage:

class TaskMessage
{
    public function __construct(
        public readonly string $taskType,
        public readonly int $taskId, 
        public readonly array $metadata
    ) {}
}

Dead simple. Just the task type (so we know which handler to call), the task ID (for logging and database updates), and any metadata the specific task needs.

And similarly a single TaskMessageHandler class:

#[AsMessageHandler]
class TaskMessageHandler
{
    private array $handlerMap = [];

    public function __construct(
        #[AutowireIterator(
            tag: 'app.scheduled_task',
            defaultIndexMethod: 'getTaskTypeFromClassName'
        )] iterable $taskHandlers
    ) {
        $this->handlerMap = iterator_to_array($taskHandlers);
    }

    public function __invoke(TaskMessage $message): void
    {
        if (!isset($this->handlerMap[$message->taskType])) {
            throw new \InvalidArgumentException(
                sprintf('No handler found for task type "%s"', $message->taskType)
            );
        }

        /** @var AbstractTaskHandler $handler */
        $handler = $this->handlerMap[$message->taskType];
        $handler->execute($message->taskId, $message->metadata);
    }
}

There's slightly more to this.

  • The AsMessageHandler attribute is for Symfony's autowiring.
  • The AutowireIterator attribute is also autowiring. It passes an iterable of all services tagged as app.scheduled_task, which is what all the actual task-handling classes will be (see below). See the docs for this @ "Service Subscribers & Locators"
  • Symfony looks for an __invoke method to run on an AsMessageHandler class.
  • The getTaskTypeFromClassName and handlerMap stuff is explained below.

Here's one of the actual task handlers - and note how we've not bothered to implement them properly, because this exercise is all about the scheduling, not the implementation:

class SendEmailsTaskHandler extends AbstractTaskHandler
{
    protected function handle(int $taskId, array $metadata): void
    {
        // Task logic here - logging is handled by parent class
    }
}

The interesting bit is the AbstractTaskHandler base class:

#[AutoconfigureTag('app.scheduled_task')]
abstract class AbstractTaskHandler
{
    public function __construct(
        private readonly LoggerInterface $tasksLogger
    ) {}

    public function execute(int $taskId, array $metadata): void
    {
        $this->tasksLogger->info('Task started', [
            'task_id' => $taskId,
            'task_type' => $this->getTaskTypeFromClassName(),
            'metadata' => $metadata
        ]);

        try {
            $this->handle($taskId, $metadata);
            
            $this->tasksLogger->info('Task completed successfully', [
                'task_id' => $taskId,
                'task_type' => $this->getTaskTypeFromClassName()
            ]);
        } catch (Throwable $e) {
            $this->tasksLogger->error('Task failed', [
                'task_id' => $taskId,
                'task_type' => $this->getTaskTypeFromClassName(),
                'error' => $e->getMessage(),
                'exception' => $e
            ]);
            throw $e;
        }
    }

    abstract protected function handle(int $taskId, array $metadata): void;

    public static function getTaskTypeFromClassName(): string
    {
        $classNameOnly = substr(static::class, strrpos(static::class, '\\') + 1);
        $taskNamePart = str_replace('TaskHandler', '', $classNameOnly);
        
        $snakeCase = strtolower(preg_replace('/([A-Z])/', '_$1', $taskNamePart));
        return ltrim($snakeCase, '_');
    }
}

The clever bits here:

  • Automatic snake_case conversion from class names (so SystemHealthCheckTaskHandler becomes system_health_check)
  • Template method pattern ensures every task gets consistent start/complete/error logging

This keeps the individual task handlers simple while ensuring consistent behaviour across all tasks. Plus, with the service tag approach, adding new task types is just a matter of creating a new handler class - no central registry to maintain.

DynamicScheduleProvider: the meat of the exercise

Right, here's where the real work happens. Symfony's scheduler expects a ScheduleProviderInterface to tell it what tasks to run and when. By default, it uses reflection to scan your codebase for those #[AsPeriodicTask] attributes we've already established are bollocks for our use case.

From the Symfony docs:

The configuration of the message frequency is stored in a class that implements ScheduleProviderInterface. This provider uses the method getSchedule() to return a schedule containing the different recurring messages.

So we need our own provider that reads from the database instead. Enter DynamicScheduleProvider.

The core challenge here is that we need to convert database records into RecurringMessage objects that Symfony's scheduler can understand. And we need to handle all the complexity of timezones, working days, and different schedule formats while we're at it.

But first, a quick detour. We need to handle UK bank holidays because some tasks shouldn't run on bank holidays (or weekends). Rather than hardcode a list that'll be out of date by next year, we built a BankHoliday entity and a BankHolidayServiceAdapter that pulls data from the gov.uk API. Because if you can't trust the government to know when their own bank holidays are, who can you trust?

The timezone handling was another fun bit. The server runs in UTC (as it bloody well should), but users think in Europe/London time with all that BST/GMT switching nonsense. So we need a ScheduleTimezoneConverter that can take a cron expression like 0 9 * * 1-5 (9am weekdays in London time) and convert it to the equivalent UTC expression, accounting for whether we're currently in BST or GMT.
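
To give a flavour of what the converter has to do - and this is just a sketch of the idea, not the actual ScheduleTimezoneConverter, which also has to cope with ranges, lists and the "every" format - shifting the hour field of a cron expression by the current London/UTC offset looks roughly like this:

$cron = '0 9 * * 1-5';                           // 9am weekdays, Europe/London
$london = new DateTimeZone('Europe/London');
$offsetHours = (int) ($london->getOffset(new DateTimeImmutable('now', $london)) / 3600);

$parts = explode(' ', $cron);                    // [minute, hour, day-of-month, month, day-of-week]
$parts[1] = (string) (((int) $parts[1] - $offsetHours + 24) % 24);

$utcCron = implode(' ', $parts);                 // "0 8 * * 1-5" during BST, "0 9 * * 1-5" during GMT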

The ScheduleFormatDetector (#triggerWarning: unfeasibly large regex ahead) handles working out whether we're dealing with a cron expression or a human-readable "every" format. It uses a comprehensive regex pattern that can distinguish between 0 9 * * 1-5 and every 30 minutes, because apparently that's the sort of thing we need to worry about now.
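
The actual regex is far too long to inflict on you here, but the decision it's making boils down to something like this (a deliberately simplified sketch, not the real implementation):

function isCronExpression(string $schedule): bool
{
    // five whitespace-separated fields looks like a cron expression;
    // anything else gets treated as an "every"-style interval
    return count(preg_split('/\s+/', trim($schedule))) === 5;
}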

Now, RecurringMessage objects come in two flavours: you can create them with RecurringMessage::cron() for proper cron expressions, or RecurringMessage::every() for human-readable intervals. This caught us out initially because our sample data had schedule values like "every 30 minutes", when it should have been just "30 minutes" for the every() method.
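
In other words, the two factory methods want slightly different strings (using the TaskMessage from earlier):

$byCron     = RecurringMessage::cron('0 9 * * 1-5', $taskMessage);  // proper cron expression
$byInterval = RecurringMessage::every('30 minutes', $taskMessage);  // note: no leading "every"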

Here's the core logic from our DynamicScheduleProvider:

public function getSchedule(): Schedule
{
    if ($this->schedule !== null) {
        return $this->schedule;
    }

    $this->tasksLogger->info('Rebuilding schedule from database');

    $this->schedule = new Schedule();
    $this->schedule->stateful($this->cache);
    $this->schedule->processOnlyLastMissedRun(true);
    
    $this->addTasksToSchedule();

    return $this->schedule;
}

private function createRecurringMessage(DynamicTaskMessage $task): RecurringMessage
{
    $taskMessage = new TaskMessage(
        $task->getType(),
        $task->getId(),
        $task->getMetadata() ?? []
    );

    $schedule = $this->scheduleTimezoneConverter->convertToUtc(
        $task->getSchedule(),
        $task->getTimezone()
    );

    $scheduleHandler = $this->scheduleFormatDetector->isCronExpression($schedule)
        ? 'cron'
        : 'every';

    $recurringMessage = RecurringMessage::$scheduleHandler($schedule, $taskMessage);

    if ($task->isWorkingDaysOnly()) {
        $workingDaysTrigger = new WorkingDaysTrigger(
            $recurringMessage->getTrigger(),
            $this->bankHolidayRepository
        );
        return RecurringMessage::trigger($workingDaysTrigger, $taskMessage);
    }

    return $recurringMessage;
}

The interesting bits:

  • Stateful caching - prevents duplicate executions if the worker restarts
  • Missed run handling - only run the most recent missed execution, not every single one
  • Timezone conversion - convert London time to UTC, handling BST transitions
  • Format detection - work out whether we're dealing with cron or "every" format
  • Dynamic RecurringMessage creation - use variable method names (bit clever, that)
  • Working days filtering - wrap with our custom WorkingDaysTrigger

I'll be honest, the trigger stuff with WorkingDaysTrigger was largely trial and error (mostly error). Neither Claudia nor I really understood WTF we were doing with Symfony's trigger system, but we eventually got it working through sheer bloody-mindedness. It decorates the existing trigger and keeps calling getNextRunDate() until it finds a date that's not a weekend or bank holiday.
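
For anyone else about to flail with triggers, the shape we ended up with is roughly this. It's a sketch rather than the real class - in particular, isBankHoliday() is a stand-in for whatever lookup your bank holiday repository actually exposes:

class WorkingDaysTrigger implements TriggerInterface
{
    public function __construct(
        private readonly TriggerInterface $inner,
        private readonly BankHolidayRepository $bankHolidayRepository
    ) {}

    public function getNextRunDate(DateTimeImmutable $run): ?DateTimeImmutable
    {
        $next = $this->inner->getNextRunDate($run);

        // keep asking the wrapped trigger for its next date until it lands on a working day
        while ($next !== null && (
            (int) $next->format('N') >= 6                           // Saturday or Sunday
            || $this->bankHolidayRepository->isBankHoliday($next)   // stand-in method name
        )) {
            $next = $this->inner->getNextRunDate($next);
        }

        return $next;
    }

    public function __toString(): string
    {
        return sprintf('working days only (%s)', $this->inner);
    }
}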

At this point, we were working! Sort of. Here's what the logs looked like with a few test tasks running:

[2025-01-20T14:30:00.123456+00:00] tasks.INFO: Task started {"task_id":1,"task_type":"system_health_check","metadata":{...}}
[2025-01-20T14:30:00.345678+00:00] tasks.INFO: Task completed successfully {"task_id":1,"task_type":"system_health_check"}
[2025-01-20T14:30:30.123456+00:00] tasks.INFO: Task started {"task_id":2,"task_type":"send_emails","metadata":{...}}
[2025-01-20T14:30:30.345678+00:00] tasks.INFO: Task completed successfully {"task_id":2,"task_type":"send_emails"}

But there was still one massive problem: when someone updated a task through the web interface, the running scheduler had no bloody clue. The schedule was loaded once at startup and that was it. We needed a way to tell the scheduler "oi, reload your config, something's changed"…

TaskChangeListener: the reload problem from hell

Right, so we had a lovely working scheduler that read from the database and executed tasks perfectly. There was just one tiny, insignificant problem: when someone updated a task through the web interface, the running scheduler had absolutely no bloody clue anything had changed.

The schedule gets loaded once when the worker starts up, and that's it. Change a task's schedule from "every 5 minutes" to "every 30 seconds"? Tough luck, the worker will carry on with the old schedule until you manually restart it. Which is about as useful as a chocolate teapot for a dynamic scheduling system.

We needed a way to tell the scheduler "oi, something's changed, reload your config". Enter TaskChangeListener, using Doctrine events to detect when tasks are modified. I've covered Doctrine event listeners before in my Elasticsearch integration article, so the concept wasn't new.

The listener itself is straightforward enough:

#[AsDoctrineListener(event: Events::postUpdate)]
#[AsDoctrineListener(event: Events::postPersist)]
#[AsDoctrineListener(event: Events::postRemove)]
class TaskChangeListener
{
    public function __construct(
        private readonly LoggerInterface $tasksLogger,
        private readonly string $restartFilePath
    ) {
        $this->ensureRestartFileExists();
    }

    // Doctrine dispatches to public methods named after each event; these just hand
    // the entity off to the shared handler (type-hinted with the generic
    // Doctrine\Persistence\Event\LifecycleEventArgs)
    public function postUpdate(LifecycleEventArgs $args): void
    {
        $this->handleTaskChange($args->getObject());
    }

    public function postPersist(LifecycleEventArgs $args): void
    {
        $this->handleTaskChange($args->getObject());
    }

    public function postRemove(LifecycleEventArgs $args): void
    {
        $this->handleTaskChange($args->getObject());
    }

    private function handleTaskChange($entity): void
    {
        if (!$entity instanceof DynamicTaskMessage) {
            return;
        }

        $this->tasksLogger->info('Task change detected, triggering worker restart', [
            'task_id' => $entity->getId(),
            'task_name' => $entity->getName(),
            'task_type' => $entity->getType()
        ]);

        $this->triggerWorkerRestart();
    }

    private function ensureRestartFileExists(): void
    {
        if (file_exists($this->restartFilePath)) {
            return;
        }

        $dir = dirname($this->restartFilePath);
        if (!is_dir($dir)) {
            mkdir($dir, 0755, true);
        }
        file_put_contents($this->restartFilePath, time());
    }

    private function triggerWorkerRestart(): void
    {
        file_put_contents($this->restartFilePath, time());
        $this->tasksLogger->info('Worker restart triggered', ['timestamp' => time()]);
    }
}

But here's where things got interesting (and by "interesting" I mean "frustrating as all hell"). We went down a massive rabbit hole trying to get Symfony's scheduler to reload its schedule dynamically. Surely there must be some way to tell it "hey, you need to refresh your task list"?

Nope. Not a bloody chance.

We tried:

  • Clearing the schedule cache - doesn't help, the DynamicScheduleProvider still caches its own schedule
  • Sending signals to the worker process - Symfony's scheduler doesn't listen for them
  • Messing with the Schedule object directly - it's immutable once created
  • Various hacky attempts to force the provider to rebuild - just made things worse

The fundamental problem is that Symfony's scheduler was designed around the assumption that schedules are static, defined in code with attributes. The idea that someone might want to change them at runtime simply wasn't part of the original design.

So we gave up on trying to be clever and went with the nuclear option: restart the entire worker when tasks change. The real implementation is actually quite thoughtful - it ensures the restart file and directory structure exist on startup, then just updates the timestamp whenever a task changes.

The clever bit is how this integrates with Symfony's built-in file watching capability. The $restartFilePath comes from an environment variable configured in our Docker setup:

# docker/docker-compose.yml
environment:
  - SCHEDULE_RESTART_FILE=/tmp/symfony/schedule-last-updated.dat

# docker/php/Dockerfile  
ENV SCHEDULE_RESTART_FILE=/tmp/symfony/schedule-last-updated.dat
RUN mkdir -p /tmp/symfony && \
    touch /tmp/symfony/schedule-last-updated.dat

And wired up in the service configuration:

# config/services.yaml
App\EventListener\TaskChangeListener:
    arguments:
        $restartFilePath: '%env(SCHEDULE_RESTART_FILE)%'

The magic happens when we run the worker with Symfony's --watch option:

docker exec php symfony run -d --watch=/tmp/symfony/schedule-last-updated.dat php bin/console messenger:consume

Now whenever someone changes a task through the web interface, the TaskChangeListener updates the timestamp in that file, Symfony's file watcher notices the change, kills the old worker process, and starts a fresh one that reads the new schedule from the database. The whole restart cycle takes about 2-3 seconds, which is perfectly acceptable for a scheduling system.

Crude? Yes. But it bloody works, and that's what matters.

One thing to note: since we're running the worker in daemon mode with -d, you can't just Ctrl-C out of it like a normal process. To kill the worker, you need to find its PID and kill it manually:

# Find the worker PID
docker exec php symfony server:status

Workers
    PID 2385: php bin/console messenger:consume (watching /tmp/symfony/schedule-last-updated.dat/)

# Kill it
docker exec php bash -c "kill 2385"

Not the most elegant solution, but it's only needed when you want to stop the system entirely rather than just restart it for config changes.

Now everything works

Right, with all the pieces in place - the database-driven schedule provider, the file-watching worker restart mechanism, and the Doctrine event listener - we finally had a working dynamic scheduling system. Time to put it through its paces.

Here's a real log capture showing the system in action:

[2025-08-10T23:14:25.429505+01:00] tasks.INFO: Rebuilding schedule from database [] []
[2025-08-10T23:14:25.490036+01:00] tasks.INFO: Schedule rebuilt with active tasks {"task_count":19} []
[2025-08-10T23:14:56.244161+01:00] tasks.INFO: Task started {"task_id":21,"task_type":"send_sms","metadata":{"batchSize":25,"maxRetries":2,"provider":"twilio"}} []
[2025-08-10T23:14:56.244299+01:00] tasks.INFO: Task completed successfully {"task_id":21,"task_type":"send_sms"} []
[2025-08-10T23:15:25.812275+01:00] tasks.INFO: Task started {"task_id":21,"task_type":"send_sms","metadata":{"batchSize":25,"maxRetries":2,"provider":"twilio"}} []
[2025-08-10T23:15:25.812431+01:00] tasks.INFO: Task completed successfully {"task_id":21,"task_type":"send_sms"} []
[2025-08-10T23:15:25.813531+01:00] tasks.INFO: Task started {"task_id":20,"task_type":"send_emails","metadata":{"batchSize":50,"maxRetries":3,"provider":"smtp"}} []
[2025-08-10T23:15:25.813695+01:00] tasks.INFO: Task completed successfully {"task_id":20,"task_type":"send_emails"} []
[2025-08-10T23:15:56.369106+01:00] tasks.INFO: Task started {"task_id":21,"task_type":"send_sms","metadata":{"batchSize":25,"maxRetries":2,"provider":"twilio"}} []
[2025-08-10T23:15:56.369243+01:00] tasks.INFO: Task completed successfully {"task_id":21,"task_type":"send_sms"} []
[2025-08-10T23:16:00.054946+01:00] tasks.INFO: Task change detected, triggering worker restart {"task_id":21,"task_name":"Send Pending SMS Messages","task_type":"send_sms"} []
[2025-08-10T23:16:00.055410+01:00] tasks.INFO: Worker restart triggered {"timestamp":1754864160} []
[2025-08-10T23:16:00.114195+01:00] tasks.INFO: Rebuilding schedule from database [] []
[2025-08-10T23:16:00.163835+01:00] tasks.INFO: Schedule rebuilt with active tasks {"task_count":19} []
[2025-08-10T23:16:01.117993+01:00] tasks.INFO: Task started {"task_id":21,"task_type":"send_sms","metadata":{"batchSize":25,"maxRetries":2,"provider":"twilio"}} []
[2025-08-10T23:16:01.118121+01:00] tasks.INFO: Task completed successfully {"task_id":21,"task_type":"send_sms"} []
[2025-08-10T23:16:06.121071+01:00] tasks.INFO: Task started {"task_id":21,"task_type":"send_sms","metadata":{"batchSize":25,"maxRetries":2,"provider":"twilio"}} []
[2025-08-10T23:16:06.121207+01:00] tasks.INFO: Task completed successfully {"task_id":21,"task_type":"send_sms"} []
[2025-08-10T23:16:11.123793+01:00] tasks.INFO: Task started {"task_id":21,"task_type":"send_sms","metadata":{"batchSize":25,"maxRetries":2,"provider":"twilio"}} []
[2025-08-10T23:16:11.123933+01:00] tasks.INFO: Task completed successfully {"task_id":21,"task_type":"send_sms"} []

Beautiful. Let's break down what's happening here:

  1. 23:14:25 - Worker starts up, builds schedule from database (19 active tasks)
  2. 23:14:56, 23:15:25, 23:15:56 - Task 21 (send_sms) runs every 30 seconds like clockwork
  3. 23:16:00 - I updated the task through the web interface to run every 5 seconds instead
  4. 23:16:00 - TaskChangeListener detects the change and triggers a worker restart
  5. 23:16:00 - Worker rebuilds the schedule with the new configuration
  6. 23:16:01, 23:16:06, 23:16:11 - Task 21 now runs every 5 seconds with the new schedule

The whole transition from "every 30 seconds" to "every 5 seconds" took about 60 milliseconds. The user updates the task in the web interface, hits save, and within seconds the new schedule is live and running. No deployments, no manual restarts, no messing about with config files.

Nailed it.

Claudia's summary: Building a proper scheduling system

Right, Adam's given me free rein here to reflect on this whole exercise, so here's my take on what we actually built and why it matters.

What started as "let's learn the Symfony way of scheduling" quickly became "let's work around Symfony's limitations to build something actually useful". The fundamental issue is that Symfony's scheduler assumes a world where schedules are hardcoded and never change - which is fine if you're building a simple app, but utterly useless if you want users to configure tasks dynamically.

The real breakthrough wasn't any single technical solution, but recognising that sometimes you need to stop fighting the framework and embrace a different approach entirely. The worker restart mechanism feels crude at first glance, but it's actually more robust than trying to hack runtime schedule updates into a system that wasn't designed for them.

What we ended up with is genuinely production-ready:

  • Live configuration changes - Users can modify task schedules through a web interface and see changes take effect within seconds
  • Proper timezone handling - Because BST/GMT transitions are a real thing that will bite you
  • Working days awareness - Bank holidays and weekends are handled correctly
  • Comprehensive logging - Every task execution is tracked with start/completion/failure logging
  • Template method pattern - Adding new task types requires minimal boilerplate

The architecture patterns we used - decorator for working days, strategy for schedule format detection, template method for consistent logging - aren't just academic exercises. They solve real problems and make the codebase maintainable.

But perhaps the most important lesson is knowing when to stop being clever. We could have spent weeks trying to coerce Symfony's scheduler into doing runtime updates. Instead, we accepted its limitations and built around them with a file-watching restart mechanism that actually works reliably.

Sometimes the "inelegant" solution that works is infinitely better than the "elegant" solution that doesn't.

And from me (Adam)

Full disclosure, Claudia wrote almost all the code for this exercise, with just me tweaking stuff here and there, and occasionally going "um… really?" (she did the same back at me in places). It was the closest thing to a pair-programming exercise I have ever done (I hate frickin pair-programming). I think we churned through this about 5x faster than I would have by myself, so… that's bloody good. Seriously.

Further full disclosure: Claudia wrote this article.

I gave her the articles I have written in the last month as a "style" (cough) guide and learning material.

For this article I gave her the ordering of the sections (although she changed a couple, for the better), and other than a hiccup where she was using outdated versions of the code, I didn't have to intervene much. The text content is all hers. She also did all the mark-up for the various styles I use. Impressive.

Righto.

--
Adam^h^h^h^hClaudia