
Thursday, 22 January 2026

Setting up dev/prod environments with Lovable and Supabase

G'day:

This is one of those "we really should have done this from the start" situations that catches up with you eventually.

We've been building an e-commerce admin application using Lovable (an AI coding platform that generates React/TypeScript apps with Supabase backends). For context, Lovable is one of those tools where you describe what you want in natural language (vibe-coding [muttermutter]), and it generates working code with database migrations and everything. Works surprisingly well, actually.

The problem: we'd been developing everything directly in what would eventually become the production environment. Single Supabase instance, no separation between dev work and live system. Every code change, every database migration, every "let's try this and see what happens" experiment - all happening in the same environment that would eventually serve real users.

This is fine when you're prototyping. It's less fine when the Product Owner has been merging features over the Christmas break and you've suddenly got 9 pull requests to audit before you can safely call anything "production ready".

Time to sort out proper dev/test/prod environments. How hard could it be?

The single environment problem

Before we get into the solution, let's be clear about what was wrong with the setup.

We had one Supabase project. Everything happened there:

  • Development work from Lovable
  • Database migrations as they were generated
  • Test data mixed with what would eventually be real data
  • Edge functions being deployed and redeployed
  • Configuration secrets that would need to be different in production

The workflow was: make changes in Lovable, push to GitHub, review the PR, merge. Rinse and repeat. No smoke testing in a separate environment, no way to verify migrations wouldn't break things, no safety net.

This meant we couldn't safely experiment. Every "what if we tried this approach?" question carried the risk of breaking the only database we had. And with a small team, there was no coordination between what one person was working on and what another person might be testing.

The obvious solution: separate Supabase instances for dev and prod, with proper deployment workflows between them. Standard stuff, really. Except Lovable's documentation barely mentions this scenario, and Supabase has some non-obvious behaviours around how environment separation actually works.

Lovable Cloud: the auto-provisioning disaster

We'd actually tried to set up separate environments once before, and it went spectacularly wrong.

The plan was simple: create a new Lovable project, connect it to our existing Supabase instance, start building features. Lovable has an option to use an external Supabase project rather than having Lovable manage everything, so we configured that upfront.

Except before we could do any actual work, Lovable forced us to enable "Lovable Cloud". This wasn't presented as optional - it was a "you must do this to proceed" situation. Fair enough, we thought, probably just some hosting infrastructure it needs.

Wrong.

Enabling Lovable Cloud auto-provisioned a completely different Supabase instance and ignored our pre-configured external connection entirely. Login attempts started failing with HTTP 400 errors because the frontend was trying to authenticate against the wrong database. The browser console showed requests going to cfjdkppbukvajhaqmoon.supabase.co when we'd explicitly configured it to use twvzqadjueqejcsrtaed.supabase.co.

It turns out Lovable has two completely different modes:

  • Lovable Cloud - auto-provisions its own Supabase, manages everything, cannot be overridden
  • External Supabase - you bring your own Supabase project and manage it yourself

Once Cloud mode is enabled, it completely overrides any external connections even if you'd explicitly configured them first. This isn't documented clearly in the Lovable UI - you just get a toggle that seems like it's enabling some hosting feature, not fundamentally changing how the entire project works.

The fix: delete the project entirely and start again with explicit "DO NOT enable Cloud" instructions from the beginning. Not ideal, but it worked.

Understanding the pieces

Once we'd learned our lesson about Lovable Cloud, we needed to properly understand how environment separation actually works with this stack.

The key insight came from an article by someone who'd already solved this problem: Lovable Branching. Their approach was straightforward:

  • DEV: Lovable project → dev GitHub branch → DEV Supabase instance → Lovable hosting
  • PROD: Same repo → main GitHub branch → PROD Supabase instance → Netlify hosting

The critical bit: completely separate Supabase instances. Not Supabase's branching feature (which exists but is more for preview environments), actual separate projects. One for development, one for production, zero overlap.

This makes sense when you think about it. Database migrations aren't like code - you can't just merge them and hope for the best. You need to test them in isolation before running them against production data. Separate instances means you can experiment freely in dev without any risk of accidentally breaking prod.

Environment variables in three different contexts

Where things get interesting is environment variables. Turns out there are three completely different systems at play:

Frontend variables (Vite): These need the VITE_ prefix to be accessible in browser-side code. You access them via import.meta.env.VITE_SUPABASE_URL and similar. The .env file contains your DEV values as defaults, but Netlify's environment variables override these at build time for production. This is standard Vite behaviour - actual environment variables take precedence over .env file values.
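
For illustration, the resulting client setup looks something like this (a sketch; VITE_SUPABASE_PUBLISHABLE_KEY is my assumed name for the key variable, not necessarily what Lovable generates):

import { createClient } from '@supabase/supabase-js';

// Vite only exposes VITE_-prefixed variables to browser code. The values
// come from .env locally, or from Netlify's environment variables at
// build time in production.
export const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_PUBLISHABLE_KEY // assumed variable name
);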

Edge function variables (Deno): These are managed through Supabase's "Edge Function Secrets" system. You set them via supabase secrets set KEY=value or through the Supabase dashboard, and they're accessed in code via Deno.env.get('KEY'). Here's the odd bit: Supabase treats all edge function environment variables as "secrets" regardless of whether they're actually sensitive. Non-secret configuration like API hostnames still goes through the secrets mechanism. It's just how Supabase works. This triggered my "someone is wrong on the internet" inclinations, and I found (well: OK, Claudia found it whilst diagnosing the issue) a GitHub issue about it: Upload non-secret environment variables to Edge Function Secrets Management. Upvoted. I notice now that someone at Supabase has noted said issue, shortly after I nudged it.

CLI configuration (Supabase tooling): When you run npx supabase link --project-ref <PROJECT_ID>, it writes the project reference to supabase/.temp/project-ref. This is local state that determines which Supabase instance the CLI commands operate against. The .temp directory is gitignored, so each developer (and each environment) can link to different projects without conflicts.

The important realisation: these three systems don't talk to each other. Your frontend env vars in .env are separate from your edge function secrets in Supabase, which are separate from your local CLI link state. They all happen to reference the same Supabase projects, but through completely independent configuration mechanisms.

The VITE_SUPABASE_PROJECT_ID battle

This is where things got properly frustrating.

The Supabase client needs two things to initialise: the project URL and the publishable key. That's it. The project ID is already embedded in the URL - https://twvzqadjueqejcsrtaed.supabase.co contains the ID right there in the subdomain. Having a separate VITE_SUPABASE_PROJECT_ID variable is completely redundant.

So we asked Lovable to remove it from the .env file.

Every. Single. Commit. It put it back.

We tried being explicit: "We don't use VITE_SUPABASE_PROJECT_ID, please remove it". Lovable responded "Yes, done" and left it there. We manually deleted it and pushed the change ourselves. The next commit from Lovable put it back. We explained why it was redundant. Lovable agreed with the reasoning, confirmed it would remove the variable, and then didn't.

The AI clearly didn't understand why it kept adding this variable back. It wasn't being defiant - it genuinely seemed to think it was helping. But no amount of prompting, explaining, or manual removal could break the pattern.

Claudia (my AI pair programmer, who was observing this farce) found it hilarious. I found it less hilarious. In the end, I did something rare: I surrendered. The variable is still in the codebase. It doesn't do anything, the Supabase client doesn't use it, but it's there. Lovable won.

This became a useful lesson about AI code generation tools: they're brilliant at generating the initial 80% of a solution, but that last 20% - the refinement, the cleanup, the removal of unnecessary cruft - sometimes requires human intervention that the AI just can't process. Even when it claims to understand.

The review workflow

Speaking of that 80/20 split, we developed a proper review process for Lovable-generated code. This wasn't just paranoia - AI-generated code needs human oversight, especially when it's going to production.

The workflow went like this:

  1. Lovable generates code based on a prompt from the Product Owner
  2. I create a pull request from the feature branch to dev
  3. GitHub Copilot does an automated review, catching obvious issues
  4. I review the code manually, looking for security concerns, deployment gotchas, architectural problems
  5. Claudia reviews it as well, often catching things I missed
  6. We compile a comprehensive list of issues and create a fix prompt for Lovable
  7. Lovable makes another pass, addressing the feedback
  8. Repeat until the code is actually mergeable

This multi-layer review caught things that no single reviewer would spot. Copilot is good at identifying code smells and standard issues. I'm good at spotting deployment risks and security problems. Claudia is good at catching logical inconsistencies and suggesting better patterns.

And, full disclosure: I can read TypeScript and React code, and having spent a few solid weeks doing "self-teaching" on both - I should blog this actually - I understand what's going on for the most part, but I am not a TS/React dev. I need Claudia and Copilot to review this stuff.

One recurring annoyance: GitHub Copilot's automated review insists on suggesting American spellings. "Initialize" instead of "initialise", "color" instead of "colour". Every. Single. Review. I'm a Kiwi, and a civilised person, and this is an app for a UK audience: the codebase uses British English not that tariff-addled colonial shite, but Copilot is having none of it.

The key insight here is that AI code generation isn't "press button, receive working code". It's more like working with a very knowledgeable but inexperienced junior developer who needs guidance on architecture, security, and project-specific patterns. The review process is where the actual quality control happens.

The actual working solution

After all the false starts and battles with Lovable's helpful tendencies, here's what actually works.

The architecture

DEV environment:

  • Supabase project: the original instance we'd been using all along
  • GitHub branches: dev for integration, with each ticket's work done in a short-lived feature branch off dev, e.g. JRA-1234_remove_project_id_AGAIN
  • Hosting: Lovable's built-in preview hosting (good enough for dev work)
  • Database: DEV Supabase instance with all our test data and experimental migrations

PROD environment:

  • Supabase project: fresh instance created specifically for production
  • GitHub branch: main only
  • Hosting: Netlify (automatic deployment on push to main)
  • Database: PROD Supabase instance with clean migration history

The decision to keep the original instance as DEV rather than promoting it to PROD was deliberate. The existing instance had all our development history, test data, and the occasional experimental schema change. Starting PROD fresh from a clean set of migrations gave us a proper foundation without any cruft.

The deployment process

Frontend deployment happens automatically via Netlify. When code merges to main, Netlify detects the change, runs npm ci followed by npm run build, and serves the static files from the dist folder. Environment variables configured in Netlify's UI override the .env file defaults, giving us PROD Supabase credentials without changing any code.
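
We configured all of this through Netlify's UI, but for reference the equivalent netlify.toml would be roughly this (a sketch - Netlify runs the dependency install itself before invoking the build command):

# assumed netlify.toml equivalent of our UI settings
[build]
  command = "npm run build"
  publish = "dist"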

Backend deployment is deliberately manual. No automation, no automatic deploys on merge, no clever CI/CD pipelines. When we're ready to deploy database changes to production:

npx supabase login
npx supabase link --project-ref <PROD_PROJECT_ID>
npx supabase db push
npx supabase functions deploy

That's it. Four commands, run by a human, who has presumably read the migrations and understood what they're about to do. The db push command reads all migration files from supabase/migrations/ and applies any that haven't been run yet, tracked via the supabase_migrations.schema_migrations table.
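
If you ever want to sanity-check which migrations have been applied to a given instance, you can query that tracking table from the Supabase SQL editor:

select version from supabase_migrations.schema_migrations order by version;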

This manual approach is a deliberate choice. Database migrations can break things in ways that frontend code changes usually don't. Having a human in the loop - someone who's actually reviewed the SQL and thought about what could go wrong - provides a safety net that automated deployments don't.

And, to be transparent, we are a very small team, and I am a Team Lead / Developer by trade, and all this "systems config shite" is a) beyond me; b) of very little interest to me. I'm doing it because "someone has to do it" (cue: sympathetic violins). I know we should have some sort of CI/CD going on, and eventually we will, but we don't need it for MVP, so I'm managing it manually for now. And - as per above - it's dead easy!

Oh, one thing I didn't mention: this is precisely how I finished standing up the new Supabase instance for prod. Obviously the new Supabase DB was empty... I just did the db push and functions deploy to get it up to date with dev.

Keeping branches in sync

Between tasks, we merge main back to dev to keep them in sync. This prevents the two branches from drifting too far apart and makes the eventual dev → main merges simpler. Standard Git workflow stuff, but worth stating explicitly because Lovable's documentation focuses almost entirely on the "single branch, continuous deployment" model.
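
In Git terms it's nothing clever, just:

git checkout dev
git merge main
git push origin dev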

Edge functions and secrets

Edge functions turned out to be simpler than expected, once we understood how Supabase handles them.

The functions themselves live in supabase/functions/ in the same repository as everything else. They're not a separate codebase or deployment - they're just TypeScript files that get deployed via npx supabase functions deploy. When you push changes to GitHub, Supabase doesn't automatically deploy them (unlike the frontend with Netlify). You need to explicitly run the deploy command.

Environment variables for edge functions work through Supabase's "Edge Function Secrets" system. Some are auto-managed by Supabase itself:

  • SUPABASE_URL
  • SUPABASE_ANON_KEY
  • SUPABASE_SERVICE_ROLE_KEY
  • SUPABASE_DB_URL

These automatically have the correct values for whichever Supabase instance is running the function. DEV Supabase runs the function, it gets DEV credentials. PROD Supabase runs the function, it gets PROD credentials. No configuration needed.
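
So inside a function, the usual pattern for getting a client against "whichever instance I'm running on" is simply this (a sketch of the standard approach):

import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

// SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY are injected by the
// Supabase instance hosting the function, so this client points at
// DEV when the function runs in DEV, and PROD when it runs in PROD.
const supabase = createClient(
  Deno.env.get('SUPABASE_URL')!,
  Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
);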

Any other environment variables need to be set manually per environment. For our project, this included:

  • EMAIL_API_KEY - for sending emails
  • BANK_API_ACCESS_TOKEN - for our bank's API integration
  • BANK_API_WEBHOOK_SECRET - webhook signature verification
  • BANK_API_HOST - the API hostname (different for sandbox vs production)

That last one is worth noting: we needed BANK_API_HOST to be api-test.ourbank.com in DEV and api.ourbank.com in PROD. This isn't a secret - it's just configuration. But Supabase treats all edge function environment variables as "secrets" regardless of whether they're actually sensitive.

You set these via the Supabase dashboard's Edge Function Secrets screen, or via the CLI:

npx supabase secrets set BANK_API_HOST=api.ourbank.com

One gotcha: you can't view the raw values of secrets in the dashboard after they're set, only their hash. This is annoying for non-sensitive configuration values where you might want to verify what's actually configured. But it's a one-time setup per environment, so not a huge problem in practice. And there is that GitHub issue…
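
The closest you can get is listing what exists (names and digests only, no values):

npx supabase secrets list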

BANK_API_HOST is then used in our edge function along these lines:

function getOurBankApiUrl(): string {
  const apiHost = Deno.env.get('BANK_API_HOST');
  if (!apiHost) {
    throw new Error(
      'Missing bank configuration. ' +
      'Set BANK_API_HOST environment variable.'
    );
  }
  return `https://${apiHost}/some_slug_here`;
}

JWT verification settings

Edge functions by default require valid Supabase JWTs in the Authorization header. For webhooks (or other computer-to-computer calls) or public endpoints, you need to disable this. This goes in supabase/config.toml:

[functions.ourbank-webhook]
verify_jwt = false

[functions.validate-bank-details]
verify_jwt = false

This is the only thing that should be in your config.toml. We initially had 60+ lines of local development server configuration (API ports, database settings, auth config) that Lovable had generated. All unnecessary - that configuration is for running Supabase locally, which we're not doing. The JWT settings do need to be here, though, because npx supabase functions deploy reads them from config.toml when it deploys each function.

Gotchas and non-obvious behaviours

Here's everything that wasn't obvious from the documentation, discovered through trial and error.

Supabase CLI tool location changed

Older documentation references a .supabase/ directory for CLI state. The CLI now uses supabase/.temp/ instead. When you run npx supabase link, it writes the project reference to supabase/.temp/project-ref, along with version tracking files and connection details.

This directory must be in .gitignore because it's environment-specific. Each developer links to their own preferred project (DEV or PROD), and these link states are stored locally. The directory structure looks like:

supabase/
├── .temp/
│   ├── project-ref          # Linked project ID
│   ├── storage-version      # Version tracking
│   ├── rest-version
│   ├── gotrue-version
│   ├── postgres-version
│   ├── pooler-url          # Connection pooler URL
│   └── cli-latest          # CLI version check
├── functions/              # Edge functions
├── migrations/             # SQL migration files
└── config.toml            # JWT settings only

The "PRODUCTION" badge means nothing

Every Supabase project shows a "PRODUCTION" badge in the dashboard header. This isn't an indicator of whether your project is actually being used for production - it's Supabase's terminology for distinguishing standalone projects from preview branches created via their branching feature. Your DEV instance will show "PRODUCTION" just like your actual production instance. Ignore it.

npx is not npm install -g

This is just me not being a Node dev.

Running npm install -g supabase returns a warning: "Installing Supabase CLI as a global module is not supported." Instead, use npx supabase <command> for everything. This downloads the CLI on-demand, caches it in npm's global cache, and executes it. It's not "installed" in the traditional sense, but it works identically from a user perspective.

First-time use requires npx supabase login which opens a browser for OAuth authentication. This stores an access token locally. Without this, CLI commands fail with "Access token not provided".

Netlify build doesn't touch the database

Common confusion: when Netlify runs npm run build, it only compiles frontend code to static files. It does not run database migrations. Those are a completely separate manual step via npx supabase db push.

This separation is deliberate - frontend and backend deploy independently. You can deploy backend changes without touching the frontend, and vice versa. The deployment order matters though: always deploy backend schema changes first, then frontend code that depends on those changes.

React Router and direct navigation

Our first production bug was a proper head-scratcher. Direct navigation to https://ourapp.netlify.app/products returned a 404, but clicking through from the home page worked fine.

Turns out this is a fundamental disconnect (for me!) between how React Router works and what servers expect. React apps are Single Page Applications - there's literally one HTML file (index.html). React Router handles navigation by swapping components in JavaScript, updating the browser's address bar without making server requests.

When you click a link inside the app (like from / to /products), React Router intercepts it and just changes what's rendered. No server request happens. But when you directly navigate to /products in a fresh browser tab, Netlify's server receives a request for /products, looks for a file called products.html, can't find it, and returns 404.

I'll be honest - this felt like broken behaviour to me. Surely "being able to navigate to any page directly" is web application table stakes? How is this not just working? But the issue is that React and React Router are client-side libraries. They have no control over what the server does. The server needs explicit configuration to serve index.html for all routes. I felt a bit thick when Claudia explained this to me using small words.

The fix is simple: create public/_redirects with one line:

/* /index.html 200

This tells Netlify: for any URL path, serve index.html instead of looking for specific files. The 200 status code means it's a rewrite, not a redirect - the browser URL stays as /products but Netlify serves index.html behind the scenes. React boots up, React Router sees the URL, and renders the correct page.

Why didn't we hit this during development? Because Vite (the dev server) already has this behaviour built in. It knows you're building an SPA and handles it automatically. The problem only appears when you deploy to a production server that doesn't know about your client-side routing.

This should probably be included by default in any SPA scaffolding tool, but it's not. Add it to your "first deploy checklist" and move on.

Environment variable override precedence

Vite's environment variable resolution: actual environment variables (set in Netlify's UI) override .env file values at build time. The .env file serves as the fallback for local development. This means you can commit DEV credentials in .env, and Netlify will use PROD credentials from its configuration without any code changes.
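
So the committed .env just carries the DEV values (placeholder values here, obviously, and the key variable name is per our setup), while Netlify's UI holds the PROD equivalents:

VITE_SUPABASE_URL=https://<DEV_PROJECT_ID>.supabase.co
VITE_SUPABASE_PUBLISHABLE_KEY=<DEV_PUBLISHABLE_KEY>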

Migration file format matters

Supabase requires migration files to use timestamp format: YYYYMMDDHHMMSS_description.sql. Lovable doesn't use this format by default. You need to explicitly instruct it in your Lovable Knowledge file, and even then it sometimes needs reinforcement in prompts. We added this to our Knowledge file:

Database migrations use timestamp format: YYYYMMDDHHMMSS_description.sql
Create a migration file for EVERY database change (tables, columns, indexes, constraints, RLS policies, functions, triggers)
Never make database changes without generating a corresponding migration file

Even with this, Lovable occasionally forgets, or will use a GUID in place of the description. Code review catches it.
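
For reference, a conforming migration file would look like this (hypothetical example):

-- supabase/migrations/20260122093000_add_product_sku.sql
alter table products add column sku text;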

Supabase key format evolution

Supabase has two key formats in the wild:

  • Legacy anon public key (JWT format): rlWuoTpv...
  • New publishable key format: sb_publishable_...

Both work, but the dashboard now recommends using publishable keys. We're using the legacy JWT format for now because both our DEV and PROD instances started with it, and mixing formats between environments seemed like asking for trouble. Migration to the new format is a separate ticket for when we're not in the middle of setting up production.

What we ended up with

After all the false starts, surrenders to Lovable's stubbornness, and discoveries about how these tools actually work, we've got a functioning dev/prod separation that's simple enough to be maintainable.

The key decisions that made it work:

  • Separate Supabase instances rather than using branching features - complete isolation, no data leakage risk
  • Manual database deployments rather than automation - deliberate, reviewed, controlled
  • Short-lived feature branches off a long-lived dev branch - standard Git workflow that the team already understands
  • Netlify for frontend hosting with environment variable overrides - zero-config deployment that just works
  • Multi-layer code review process - Copilot for automated checks, me for architecture and security, Claudia for catching the bits I miss

The workflow is straightforward: develop in Lovable against DEV, review the pull request, merge to dev, smoke test, merge dev to main when ready, manually deploy backend changes, let Netlify handle the frontend. It's not fancy, there's no sophisticated CI/CD pipeline, but it's appropriate for a small team building an MVP.

The biggest lesson: AI code generation tools like Lovable are brilliant at the initial 80% of implementation, but that last 20% - the refinement, security review, deployment considerations - still needs human oversight. A technically proficient human. The review workflow isn't overhead; it's where the actual quality control happens.

Don't get sucked into the hype: "vibe coding" is simply not a thing when it comes to production applications. It's only good for building functional demos.

And sometimes, you just have to accept that VITE_SUPABASE_PROJECT_ID is going to live in your codebase forever, doing absolutely nothing, because Lovable has decided it belongs there and no amount of reasoning will change its mind.

Righto.

--
Adam


P.S. As well as doing all the Netlify config for PROD, I also set up a separate Netlify site for TEST. This one triggers builds off merges to dev and uses the DEV Supabase credentials. It's exposed to the world on the admin-test subdomain (live is just admin). This gives the Product Owner a stable environment to test new features before they go live, running in the same hosting setup as production but against the dev database. Means we can catch UI issues or integration problems in a production-like environment without risking actual production.