G'day:
Right, so yesterday's article covered building a database-driven scheduled task system that actually works with Symfony's scheduler. We got dynamic configuration, timezone handling, working days filtering, and a worker restart mechanism that lets users update task schedules through a web interface without redeploying anything.
All working perfectly. Job done. Time to move on to the next thing, right?
Well, not quite. Turns out we'd built a lovely scheduling system but forgotten to implement something rather important. And by "forgotten", I mean we'd built the UI for it, added the database columns for it, even written the template code to display it… but never actually wired up the logic to make it work.
Grumble.
The clue was in our DynamicTaskMessage entity. We had these fields:
#[ORM\Column(nullable: true)]
private ?DateTimeImmutable $scheduledAt = null;
#[ORM\Column(nullable: true)]
private ?DateTimeImmutable $executedAt = null;
#[ORM\Column(nullable: true)]
private ?int $executionTime = null;
#[ORM\Column(type: Types::TEXT, nullable: true)]
private ?string $lastResult = null;
And our task listing template dutifully displayed columns for "Scheduled At", "Last Executed", "Execution Time", and "Last Result". But when you looked at the actual interface, every single one of those columns just showed dashes or "Never". Because we'd never actually implemented the code to populate them when tasks ran.
We had execution tracking fields, but no execution tracking. Brilliant.
What followed was an afternoon of debugging that included seemingly every Doctrine gotcha, a timezone configuration cock-up that'll make you question your life choices, and at least three separate "oh for the love of god" moments. Claudia's been with me through the whole saga, so she knows exactly where I went wrong (spoiler: everywhere).
By the end of it, we had proper real-time execution tracking working, but the journey there was… educational. Let's just say if you've ever wondered how many different ways you can break Doctrine entity relationships, we found most of them.
What we forgot to implement
So what exactly had we forgotten? The execution tracking system. We'd built all the infrastructure for monitoring when tasks run, how long they take, and what happens when they execute, but never actually implemented the bit that does the monitoring.
The DynamicTaskMessage entity had fields for tracking execution data:
- scheduledAt - when the task is next due to run
- executedAt - when it last executed
- executionTime - how long it took in milliseconds
- lastResult - success message or error details
The web interface had columns for displaying all this information. Users could see a lovely table with headers for "Scheduled At", "Last Executed", "Execution Time", and "Last Result". But every single cell just showed dashes or "Never" because the fields never got populated.
Our AbstractTaskHandler was doing comprehensive logging to files:
$this->tasksLogger->info('Task started', [
    'task_id' => $taskId,
    'task_type' => $this->getTaskTypeFromClassName(),
    'metadata' => $metadata
]);

// ... task execution ...

$this->tasksLogger->info('Task completed successfully', [
    'task_id' => $taskId,
    'task_type' => $this->getTaskTypeFromClassName()
]);
So we knew tasks were running and completing. The logs showed everything working perfectly. But none of that information was making it back to the database where users could actually see it.
Classic case of building the display before implementing the functionality. We'd designed the perfect execution monitoring interface for a system that didn't actually monitor executions.
The fix seemed straightforward enough: measure execution time in the AbstractTaskHandler, capture the result, and update the database record when tasks complete. How hard could it be?
Turns out, quite hard. Because implementing execution tracking properly meant diving head-first into every Doctrine gotcha you can imagine, plus a few timezone-related self-inflicted wounds that I'm still slightly embarrassed about.
Entity separation: fixing the fundamental design flaw
Before diving into the execution tracking implementation, we had a more fundamental problem to sort out. Our DynamicTaskMessage entity was trying to do two different jobs: store task configuration AND track execution history. That's a textbook violation of the single responsibility principle, and it was about to cause us some proper headaches.
The issue became obvious when we started thinking about updates. When someone changes a task's schedule through the web interface, that should trigger a worker restart so the new schedule takes effect. But when a task completes and we update its execution tracking data, that definitely shouldn't restart the worker - tasks complete every few seconds, and we'd end up in a constant restart loop.
But our TaskChangeListener was listening for any changes to DynamicTaskMessage entities:
#[AsDoctrineListener(event: Events::postUpdate)]
class TaskChangeListener
{
    private function handleTaskChange($entity): void
    {
        if (!$entity instanceof DynamicTaskMessage) {
            return;
        }

        // This triggers for EVERY change to the entity
        $this->triggerWorkerRestart();
    }
}
So updating execution tracking fields would trigger worker restarts. Not exactly what you want when tasks are completing every 30 seconds.
We had a few options for solving this:
- Make the listener smarter - only restart on actual schedule/config changes, not execution tracking updates
- Use field-level change detection in the listener to ignore execution tracking fields
- Use raw PDO to update execution tracking - bypass Doctrine entirely so no events fire
- Split the data - move execution tracking to a separate entity that doesn't trigger restarts
Option 4 was the cleanest solution: separate the concerns properly. Configuration data (what to run, when to run it) goes in one entity, execution tracking data (when it last ran, what happened) goes in another.
We created a new TaskExecution entity:
#[ORM\Entity(repositoryClass: TaskExecutionRepository::class)]
class TaskExecution
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\OneToOne(targetEntity: DynamicTaskMessage::class, inversedBy: 'execution')]
    #[ORM\JoinColumn(nullable: false)]
    private ?DynamicTaskMessage $task = null;

    #[ORM\Column(nullable: true)]
    private ?DateTimeImmutable $nextScheduledAt = null;

    #[ORM\Column(nullable: true)]
    private ?DateTimeImmutable $executedAt = null;

    #[ORM\Column(nullable: true)]
    private ?int $executionTime = null;

    #[ORM\Column(type: Types::TEXT, nullable: true)]
    private ?string $lastResult = null;

    // ... getters and setters
}
And cleaned up the DynamicTaskMessage entity to remove the execution tracking fields:
#[ORM\OneToOne(targetEntity: TaskExecution::class, mappedBy: 'task', cascade: ['persist', 'remove'])]
private ?TaskExecution $execution = null;
Now configuration changes to DynamicTaskMessage trigger worker restarts, but execution updates to TaskExecution don't. The TaskChangeListener only cares about the configuration entity, not the execution tracking entity.
Simple separation of concerns, but it solved a fundamental architectural problem before it became a runtime nightmare.
The Doctrine persist comedy: worked once, then mysteriously stopped
Right, with the entity separation sorted, time to implement the actual execution tracking. The plan was straightforward: update the AbstractTaskHandler to measure execution time, capture results, and update the TaskExecution record when tasks complete.
The implementation seemed simple enough:
public function execute(DynamicTaskMessage $task): void
{
    $startTime = new DateTimeImmutable();
    $executionStart = microtime(true);

    try {
        $result = $this->handle($task);
        $executionTime = (int) round((microtime(true) - $executionStart) * 1000);
        $this->updateTaskExecution($task, $startTime, $executionTime, $result);
    } catch (Throwable $e) {
        $executionTime = (int) round((microtime(true) - $executionStart) * 1000);
        $this->updateTaskExecution($task, $startTime, $executionTime, 'ERROR: ' . $e->getMessage());
        throw $e;
    }
}

private function updateTaskExecution(DynamicTaskMessage $task, DateTimeImmutable $executedAt, int $executionTime, string $result): void
{
    $execution = $task->getExecution();
    if (!$execution) {
        $execution = new TaskExecution();
        $execution->setTask($task);
        $this->entityManager->persist($execution);
    }

    $execution->setExecutedAt($executedAt);
    $execution->setExecutionTime($executionTime);
    $execution->setLastResult($result);

    $this->entityManager->flush();
}
Tested it with a high-frequency task running every 30 seconds. First execution: worked perfectly. Database got updated with execution time, result, timestamp - everything looking good.
Second execution: nothing. Third execution: still nothing. The task was running fine (logs showed it completing successfully), but the execution tracking stopped updating after the first run.
Spent ages debugging this. Was the entity relationship broken? Was the message queue losing the task ID? Was there some race condition?
Nope. Just standard Doctrine behaviour that catches out everyone eventually.
The problem was in this bit:
if (!$execution) {
    $execution = new TaskExecution();
    $execution->setTask($task);
    $this->entityManager->persist($execution); // Only persisting NEW entities
}

$execution->setExecutedAt($executedAt);
$execution->setExecutionTime($executionTime);
$execution->setLastResult($result);

$this->entityManager->flush(); // But not telling Doctrine about UPDATES
First execution: no TaskExecution record exists, so we create one and persist() it. Doctrine knows about the new entity and saves it on flush().
Second execution: TaskExecution record already exists, so we skip the persist() call. We update the entity properties, but never tell Doctrine that the entity has changed. So flush() does nothing.
Doctrine gotcha: flush() only writes out changes for entities its unit of work is actually tracking. persist() is a no-op on an entity Doctrine already manages, so calling it unconditionally is harmless - and it registers anything Doctrine has lost track of. The fix was dead simple:
$execution->setExecutedAt($executedAt);
$execution->setExecutionTime($executionTime);
$execution->setLastResult($result);
$this->entityManager->persist($execution); // Always persist, whether new or updated
$this->entityManager->flush();
One line addition. Hours of debugging. Sometimes the simplest bugs are the most frustrating.
The entity detachment disaster: message queues vs Doctrine
With the persist issue sorted, execution tracking was working perfectly. For about five minutes. Then we hit a new error that was even more mystifying:
"ERROR: A new entity was found through the relationship 'App\Entity\TaskExecution#task' that was not configured to cascade persist operations for entity: App\Entity\DynamicTaskMessage@513. To solve this issue: Either explicitly call EntityManager#persist() on this unknown entity or configure cascade persist this association in the mapping."
Right. So Doctrine was complaining that it didn't recognise the DynamicTaskMessage entity when we tried to update the TaskExecution record. But we'd literally just loaded that entity from the database. How could Doctrine not know about it?
The clue was in the error message: "unknown entity". The DynamicTaskMessage had become "detached" from Doctrine's Unit of Work - essentially, Doctrine had forgotten it was managing that entity.
But why? The entity came from our DynamicScheduleProvider, got passed into a TaskMessage, went through Symfony's message queue, and arrived at the task handler. Should be fine, right?
Wrong. Here's what actually happens:
- DynamicScheduleProvider loads entity from database - Doctrine knows about it
- Entity gets passed to TaskMessage constructor - still fine
- TaskMessage gets serialized for the message queue - entity becomes detached
- TaskMessage gets deserialized in the worker process - entity is no longer managed by Doctrine
- Task handler tries to update the entity - Doctrine has no idea what this object is
The serialization/deserialization cycle breaks Doctrine's entity management. The object still contains all the right data, but Doctrine doesn't recognise it as something it should be tracking.
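The detachment is easy to demonstrate without Doctrine at all. unserialize() always constructs a brand-new object, so any registry that tracks the original instance - which is essentially what Doctrine's identity map does - can't recognise the round-tripped copy. A minimal sketch in plain PHP (the FakeTask class is just an illustration):

```php
// unserialize() builds a brand-new object: same data, different identity.
// That's why an identity map keyed on the original instance can't
// recognise an entity that's been through a message queue.
class FakeTask
{
    public function __construct(public int $id, public string $name) {}
}

$original = new FakeTask(42, 'api_healthcheck');

// Simulate the message queue: serialize on dispatch, unserialize in the worker.
$roundTripped = unserialize(serialize($original));

var_dump($roundTripped->id === $original->id);     // true  - same data
var_dump($roundTripped->name === $original->name); // true  - same data
var_dump($roundTripped === $original);             // false - different object
var_dump(spl_object_id($roundTripped) === spl_object_id($original)); // false
```

The data survives the round trip intact; the object identity does not - and identity is all Doctrine's Unit of Work cares about.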
We had a few options for fixing this:
- Use EntityManager::merge() to reattach the entity (deprecated in Doctrine ORM 2.x and removed in 3.0, so not a great long-term bet)
- Fetch the execution record directly by task ID
- Re-fetch the task entity to get it back into managed state
We went with option three - re-fetch the entity in the updateTaskExecution method:
private function updateTaskExecution(DynamicTaskMessage $task, DateTimeImmutable $executedAt, int $executionTime, string $result): void
{
    // Re-fetch to get managed entities after serialization/deserialization
    $task = $this->entityManager->find(DynamicTaskMessage::class, $task->getId());
    if (!$task) {
        return; // task was deleted between dispatch and execution
    }

    $execution = $task->getExecution();
    if (!$execution) {
        $execution = new TaskExecution();
        $execution->setTask($task);
    }

    $execution->setExecutedAt($executedAt);
    $execution->setExecutionTime($executionTime);
    $execution->setLastResult($result);

    $this->entityManager->persist($execution);
    $this->entityManager->flush();
}
Simple fix: fetch the entity fresh from the database so Doctrine knows it's managing it again. Bit of extra database overhead, but it solves the detachment problem cleanly.
Another "message queues and ORMs don't always play nicely together" gotcha. The abstraction leaks, and you need to understand what's happening under the hood to fix it.
The timezone configuration tragedy: shooting yourself in the foot
Right, so execution tracking was finally working. Tasks were updating their execution records, timestamps were being stored, everything looked good. Except for one small problem: none of the scheduled tasks were actually running when they were supposed to.
I had a task configured to run between 20:03 and 20:59 every minute (using the cron expression 3-59 20 * * * in BST). It was currently 20:26 BST, well within that window, but the task wasn't executing. The scheduler showed it was loaded correctly, the next run time looked right, but nothing was happening.
Spent ages debugging this. Was the cron parsing broken? Was the timezone conversion wrong? Was there some issue with the working days logic?
Eventually checked the server timezone configuration:
# php -i | grep -i zone
"Olson" Timezone Database Version => 2025.2
Timezone Database => internal
Default timezone => Europe/London
date.timezone => Europe/London => Europe/London
And there it was. PHP was configured to use Europe/London timezone, not UTC. Which means when our carefully crafted UTC-based scheduling system tried to work out what time it was, PHP was helpfully converting everything to BST.
The worst part? I'd configured this myself. In our Docker setup:
# docker/php/usr/local/etc/php/conf.d/app.ini
date.timezone = Europe/London
I'd literally set PHP to use London time, then spent hours debugging why UTC scheduling wasn't working. For the love of god.
The really embarrassing bit was that the evidence was staring me in the face the whole time. The execution tracking UI displays times in UTC (an hour behind BST), but when a task ran at - for example - 20:45 BST, the UI was also saying 20:45. It should have read 19:45. I just didn't notice. Sigh. Fuck sake.
The fix was trivial:
# docker/php/usr/local/etc/php/conf.d/app.ini
date.timezone = UTC
One line change. Hours of debugging. Sometimes the most frustrating bugs are the ones you've inflicted on yourself.
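The failure mode is easy to reproduce: PHP's default timezone silently changes which instant a naïve timestamp string refers to, which is exactly how a Europe/London php.ini setting skews a scheduler that assumes UTC. A quick demonstration:

```php
// The same wall-clock string means a different instant depending on
// PHP's default timezone. With date.timezone = Europe/London, a "UTC"
// scheduling system is quietly working an hour off during BST.

date_default_timezone_set('Europe/London');
$london = new DateTimeImmutable('2025-08-12 20:45:00'); // interpreted as BST

date_default_timezone_set('UTC');
$utc = new DateTimeImmutable('2025-08-12 20:45:00');    // interpreted as UTC

// Same string, instants one hour apart (BST = UTC+1 in August).
echo $london->getTimestamp() - $utc->getTimestamp(), "\n"; // -3600

// Converting the BST instant to UTC shows the hour shift the UI exposed:
echo $london->setTimezone(new DateTimeZone('UTC'))->format('H:i'), "\n"; // 19:45
```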
The schedule conversion inconsistency: calculating next run times
With the timezone configuration fixed, tasks were finally running at the right times. But there was still one issue with the execution tracking: the "Next Scheduled" times were wrong.
A task running every minute from 20:03-20:59 BST would execute correctly, but then show its next scheduled time as something like 2025-08-12 21:03:00 - after the end of the run window, implying it would fire again within the hour rather than the next day. By that point I took one look, thought "timezone math cock-up", and that turned out to be right.
The problem was in how we calculated the next run time. Our calculateNextScheduledAt method was using the original schedule from the database:
private function calculateNextScheduledAt(DynamicTaskMessage $task, DateTimeImmutable $fromTime): DateTimeImmutable
{
    $schedule = $task->getSchedule(); // Gets "3-59 20 * * *" from database

    if ($this->scheduleFormatDetector->isCronExpression($schedule)) {
        $cron = new CronExpression($schedule);
        return DateTimeImmutable::createFromMutable($cron->getNextRunDate($fromTime));
    }

    // ... handle relative formats
}
But the actual scheduling was using the timezone-converted schedule. Our DynamicScheduleProvider was doing this:
$schedule = $this->scheduleTimezoneConverter->convertToUtc(
    $task->getSchedule(), // "3-59 20 * * *" in BST
    $task->getTimezone()  // Europe/London
); // Results in "3-59 19 * * *" in UTC
So we were scheduling tasks with the UTC-converted expression but calculating next run times with the original BST expression. No wonder the times were wrong.
The fix was to apply the same timezone conversion in both places:
private function calculateNextScheduledAt(DynamicTaskMessage $task, DateTimeImmutable $fromTime): DateTimeImmutable
{
    // Convert schedule to UTC just like DynamicScheduleProvider does
    $schedule = $this->scheduleTimezoneConverter->convertToUtc(
        $task->getSchedule(),
        $task->getTimezone()
    );

    if ($this->scheduleFormatDetector->isCronExpression($schedule)) {
        $cron = new CronExpression($schedule);
        return DateTimeImmutable::createFromMutable($cron->getNextRunDate($fromTime));
    }

    // ... rest of method
}
Now both the scheduling and the next-run calculations use the same converted expression. The "Next Scheduled" times finally showed the correct values.
Another case of inconsistent logic between related parts of the system. The scheduling worked, the execution tracking worked, but they were using different interpretations of the same schedule data.
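For the simple case, the heart of a cron timezone conversion is just shifting the hour field by the zone's UTC offset on a reference date. I don't have the internals of ScheduleTimezoneConverter to hand, so the function below is my own illustration of the idea - a real converter also has to cope with non-whole-hour offsets, day rollover, and DST transitions:

```php
// Illustrative sketch: shift a cron expression's hour field from a local
// timezone to UTC. Handles plain hours, ranges ("3-18") and lists; does
// NOT handle day rollover (e.g. hour 0 in UTC+1 lands on the previous
// day) or steps - a production converter needs to.
function convertCronHourToUtc(string $cron, string $timezone, DateTimeImmutable $reference): string
{
    $fields = preg_split('/\s+/', trim($cron)); // [min, hour, dom, month, dow]

    // UTC offset in whole hours on the reference date (DST-aware).
    $offsetHours = intdiv((new DateTimeZone($timezone))->getOffset($reference), 3600);

    $shiftHour = fn (string $h): string => (string) ((((int) $h - $offsetHours) % 24 + 24) % 24);

    $fields[1] = implode(',', array_map(
        function (string $part) use ($shiftHour): string {
            if ($part === '*') {
                return $part; // every hour stays every hour
            }
            return implode('-', array_map($shiftHour, explode('-', $part)));
        },
        explode(',', $fields[1])
    ));

    return implode(' ', $fields);
}

// "3-59 20 * * *" in Europe/London during BST becomes "3-59 19 * * *" in UTC.
echo convertCronHourToUtc('3-59 20 * * *', 'Europe/London', new DateTimeImmutable('2025-08-12'));
```

The key point from the bug above: whatever this conversion does, it has to be applied identically wherever the schedule is interpreted - both when registering the trigger and when calculating the next run time.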
The working days UI shortfall: missing form controls
With execution tracking finally working properly, I decided to test the working days logic. We'd built this whole system for respecting UK bank holidays and weekends - tasks could be configured with workingDaysOnly to skip non-working days.
The logic was all there: WorkingDaysTrigger decorator, bank holiday integration, weekend detection. But when I went to test it by creating a task with working days enabled... the option wasn't in the form.
The entity had the field:
#[ORM\Column(nullable: false, options: ['default' => false])]
private ?bool $workingDaysOnly = false;
The template was displaying it in the task listing (well, it would have if any tasks had it enabled). But the DynamicTaskMessageType form was missing the field entirely.
So we had this sophisticated working days system that users couldn't actually configure through the interface. You could only enable it by editing database records directly, which rather defeats the point of building a user-friendly web interface.
The fix was straightforward - add the missing form field:
->add('workingDaysOnly', CheckboxType::class, [
    'required' => false,
    'label' => 'Working Days Only',
    'help' => 'Skip weekends and bank holidays',
    'data' => $options['data']->isWorkingDaysOnly() ?? false
])
But it needed to go in the right place in the form layout. The edit template was manually rendering each field, so adding it to the form builder wasn't enough - it also needed the corresponding template markup:
{{ form_widget(form.workingDaysOnly, {'attr': {'class': 'form-check-input'}}) }}
{{ form_label(form.workingDaysOnly, 'Working Days Only', {'label_attr': {'class': 'form-check-label'}}) }}
{{ form_errors(form.workingDaysOnly) }}
And the task listing needed a column to show the current setting:
{% if task.workingDaysOnly %}
    Yes
{% else %}
    -
{% endif %}
A proper oversight. We'd implemented all the backend logic but forgotten to expose it through the user interface. Users had no way to actually configure the feature we'd spent time building.
Once the form controls were in place, testing the working days logic was trivial: add a fake bank holiday for today, toggle the setting on a task, watch it either run normally or skip to the next working day. Worked perfectly.
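For reference, the heart of a working-days check is tiny: weekend detection plus a bank-holiday lookup. The helpers below are my own sketch, not the actual WorkingDaysTrigger - in the real system the decorator wraps this logic around the inner trigger's next-run calculation, and the holiday list comes from the bank holiday integration rather than a hard-coded array:

```php
// Minimal working-days check: skip Saturdays, Sundays and bank holidays.
// $bankHolidays is a list of 'Y-m-d' strings (hard-coded here for the sketch).
function isWorkingDay(DateTimeImmutable $date, array $bankHolidays): bool
{
    // 'N' gives ISO-8601 day of week: 6 = Saturday, 7 = Sunday.
    if ((int) $date->format('N') >= 6) {
        return false;
    }
    return !in_array($date->format('Y-m-d'), $bankHolidays, true);
}

// Walk forward day by day until we land on a working day.
function nextWorkingDay(DateTimeImmutable $from, array $bankHolidays): DateTimeImmutable
{
    $day = $from;
    while (!isWorkingDay($day, $bankHolidays)) {
        $day = $day->modify('+1 day');
    }
    return $day;
}

$holidays = ['2025-08-25']; // UK summer bank holiday (a Monday)

// Saturday the 23rd skips the weekend AND the Monday holiday -> Tuesday the 26th.
echo nextWorkingDay(new DateTimeImmutable('2025-08-23'), $holidays)->format('Y-m-d'); // 2025-08-26
```

This is also exactly why the fake-bank-holiday test works: inject today's date into the holiday list and a workingDaysOnly task should defer to the next working day.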
Final working solution: real-time execution tracking
Right, with all the debugging drama sorted, we finally had a complete execution tracking system working properly. Tasks were running on schedule, updating their execution records, and calculating correct next run times. The web interface showed real-time data about task performance and scheduling.
Here's what the final architecture looked like:
- Clean entity separation - DynamicTaskMessage for configuration, TaskExecution for tracking
- Template method pattern - AbstractTaskHandler handles all the execution tracking boilerplate
- Timezone consistency - Same conversion logic for scheduling and next-run calculations
- Entity re-fetching - Handles message queue serialization cleanly
- Complete UI integration - Users can configure everything through the web interface
The execution tracking captures everything you'd want to monitor:
- executedAt - when the task last ran
- executionTime - how long it took in milliseconds
- lastResult - success message or error details
- nextScheduledAt - when it's due to run next (calculated using proper timezone conversion)
The task listing shows it all in a sensible order: Last Executed | Execution Time | Last Result | Next Scheduled. Past execution data before future scheduling, which feels more natural than the other way around.
And the system handles edge cases properly. Cron ranges like 3-59 20 * * * correctly stop at the boundary - tasks run every minute from 20:03 to 20:59, then skip to 20:03 the next day. Working days logic respects both weekends and UK bank holidays. High-frequency tasks (every 30 seconds for testing) update their tracking data reliably without causing worker restart loops.
The logs show everything working smoothly:
[2025-08-12T19:42:00+00:00] tasks.INFO: Task started {"task_id":15,"task_type":"api_healthcheck","metadata":{"endpoints":["payment","shipping","crm"]}}
[2025-08-12T19:42:00+00:00] tasks.INFO: Task completed successfully {"task_id":15,"task_type":"api_healthcheck"}
[2025-08-12T19:44:00+00:00] tasks.INFO: Task started {"task_id":16,"task_type":"queue_monitor","metadata":{"maxDepth":1000,"alertThreshold":500}}
[2025-08-12T19:44:00+00:00] tasks.INFO: Task completed successfully {"task_id":16,"task_type":"queue_monitor"}
No more execution tracking fields showing dashes or "Never". No more mysterious timezone offsets. No more missing persist calls or entity detachment errors. Just a working system that does what it says it will do.
Building the execution tracking system was harder than it should have been, mainly because we hit every possible Doctrine gotcha along the way. But the end result is genuinely useful - users can see at a glance which tasks are running properly, which ones are failing, and when everything is scheduled to happen next.
Sometimes the journey is more educational than the destination.
Claudia's summary: Learning from spectacular failures
Right, Adam's let me loose again to reflect on this whole debugging saga. What started as "just add some execution tracking" turned into a masterclass in everything that can go wrong when you're working with ORMs, message queues, and timezones.
The most striking thing about this exercise was how each bug felt completely mystifying until you understood the root cause, then became blindingly obvious in hindsight. The missing persist() call, the entity detachment through serialization, the timezone configuration self-sabotage - all of them followed the same pattern of "this should work, why doesn't it work, oh for the love of god of course that's why".
But the real lesson here isn't about any specific technical solution. It's about debugging methodology and knowing when to step back and question your assumptions. Adam spent hours debugging timezone issues because he was looking at the symptoms (tasks running at the wrong frequency) instead of the fundamentals (what timezone is PHP actually using). He was checking the arithmetic instead of questioning the setup.
The entity separation decision was probably the most important architectural choice we made. It would have been tempting to try clever workarounds - selective event listeners, field-level change detection, all sorts of hacky approaches. Instead, we stepped back and asked "what are we actually trying to model here?" Configuration and execution history are fundamentally different concerns, so they belong in different entities.
The Doctrine gotchas were educational too. The persist/flush cycle, entity detachment through serialization, relationship annotations - these are all things you learn by hitting them in practice, not by reading documentation. The fact that persist() worked perfectly for new entities but silently failed for updates is exactly the kind of subtle behaviour that catches people out.
What we ended up with is a proper monitoring system that gives users genuine insight into what their scheduled tasks are doing. Not just "task exists in database" but "task last ran at this time, took this long, and completed with this result". That's the difference between a toy system and something you'd actually want to use in production.
The timezone configuration comedy will go down in history as a perfect example of developer self-sabotage. Sometimes the most dangerous bugs are the ones you've inflicted on yourself through your own configuration choices.
Righto.
--
"Adam" [cough]