G'day:
I'd been putting off setting up proper HTTPS for local development because I figured it'd be a right pain in the arse. Turns out it's dead straightforward when you use mkcert instead of messing about with self-signed certificates and browser warnings.
Here's how I sorted it for a Dockerised app that was running on http://localhost:8080 and needed to work on https://claudia.local instead.
The mkcert approach
mkcert creates locally-trusted development certificates by installing a local certificate authority that your browser automatically trusts. No more clicking through security warnings or adding certificate exceptions.
Installation on Ubuntu/WSL:
sudo apt install mkcert libnss3-tools -y
mkcert -install
The -install step creates and installs the local CA. You'll see output like:
Created a new local CA 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊
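If you're curious where that CA actually ends up, mkcert will tell you, which is handy if you ever want to back it up or remove it later. The path in the comment is just an example and will differ per machine:
mkcert -CAROOT
# e.g. /home/you/.local/share/mkcert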
Generate certificates for your domain
For Docker setups, you'll want to generate the certificates in a location that matches where your container expects to find them. In my case, that meant creating them in the nginx config directory structure:
cd docker/nginx/etc/ssl/certs
mkcert claudia.local
This creates two files: claudia.local.pem (the certificate) and claudia.local-key.pem (the private key).
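As an aside, mkcert is happy to put several names on the one certificate if you also want localhost and the loopback addresses covered. A sketch (adjust the names to whatever you need):
mkcert claudia.local localhost 127.0.0.1 ::1
The generated filenames pick up a suffix when you do that (something like claudia.local+3.pem), so you'd need to adjust the nginx paths further down to match if you go that route.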
One thing to watch out for: you probably don't want those .pem files in source control. While mkcert certificates are only locally trusted (so not a huge security risk), it's still good practice to exclude private keys from git. Chuck a .gitignore file in your docker/nginx/etc/ssl/certs/ directory:
*.pem
*.key
Hosts file configuration
Tell your system that claudia.local points to localhost by editing your hosts file:
Windows: C:\Windows\System32\drivers\etc\hosts (edit as Administrator)
Linux/macOS: /etc/hosts
Add this line:
127.0.0.1 claudia.local
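A quick sanity check that the entry took effect (Linux/macOS form shown; on Windows it's ping -n 1):
ping -c 1 claudia.local
It should report replies from 127.0.0.1.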
Docker and nginx configuration
Now for the container bits. First, copy the certificates into your nginx container by updating your Dockerfile:
FROM nginx:bookworm
WORKDIR /usr/share/nginx/
# Copy nginx config
COPY etc/nginx/nginx.conf /etc/nginx/nginx.conf
COPY etc/nginx/conf.d/ /etc/nginx/conf.d/
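# The mkcert key file ends in -key.pem, so this wildcard picks up both the cert and the key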
COPY etc/ssl/certs/*.pem /etc/ssl/certs/
# ... rest of Dockerfile
EXPOSE 80 443
Update your nginx server config to handle both HTTP and HTTPS:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server ipv6only=on;

    ssl_certificate /etc/ssl/certs/claudia.local.pem;
    ssl_certificate_key /etc/ssl/certs/claudia.local-key.pem;

    server_name claudia.local;

    # ... rest of server config
}
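If you'd rather push everything onto HTTPS instead of answering on both, one optional variant is to move the port 80 listeners into their own little server block that just redirects. A sketch, not part of the setup above:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name claudia.local;
    return 301 https://$host$request_uri;
}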
And update your docker-compose.yml to expose the SSL port:
nginx:
  container_name: nginx-app
  build:
    context: nginx
    dockerfile: Dockerfile
  ports:
    - "80:80"
    - "443:443"
  # ... rest of service config
Sorting out the URLs
Don't forget to update any environment variables or config that reference your old localhost:8080 URLs. In my case, I had a few files that needed changing:
For browser-facing URLs, use the new HTTPS domain:
APP_BASE_URL=https://claudia.local
But keep internal container-to-container communication on the original addresses:
API_BASE_URL=http://host.docker.internal
The containers don't know about your local domain - they still talk to each other via Docker's internal networking.
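To make that split concrete, here's roughly how it could look in docker-compose.yml for the app container. The web service name is purely illustrative; the variable names match the examples above:
web:
  environment:
    # Browser-facing URL uses the new HTTPS domain
    APP_BASE_URL: https://claudia.local
    # Container-to-container traffic stays on Docker's own networking
    API_BASE_URL: http://host.docker.internal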
That's it
Rebuild your containers and you should be able to access your app via https://claudia.local with a proper green padlock. No browser warnings, no certificate exceptions to click through.
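For completeness, the rebuild and a quick check look something like this (assuming Docker Compose v2 and curl on the host; because the mkcert CA sits in the system trust store, curl shouldn't complain about the certificate either):
docker compose up -d --build
curl -I https://claudia.local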
The whole thing took about 15 minutes once I stopped overthinking it. Definitely wish I'd done this ages ago instead of putting up with localhost:8080 for everything.
Righto.
--
Adam
References
- "Use HTTPS for local development" by Maud Nalpas
- "FiloSottile / mkcert on github" by Filippo Valsorda
- "Quick and easy local ssl/https with mkcert on ubuntu" by Andrew Stilliard