Migrating from Nginx to Caddy — Why I Switched and Never Looked Back
I used Nginx for years. It’s battle-tested, fast, and everywhere. But every time I set up a new site or subdomain, I found myself copying config blocks, running Certbot, debugging SSL renewals, and wondering why I was spending time on this instead of building things.
Then I tried Caddy. I haven’t touched Nginx since.
The Nginx Pain
My typical Nginx workflow for a new service looked like this:
- Write a server block in /etc/nginx/sites-available/
- Symlink it into sites-enabled/
- Run certbot --nginx -d subdomain.example.com
- Run nginx -t && systemctl reload nginx
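Spelled out as commands, that workflow looks roughly like this (the config filename and domain are illustrative):

```shell
# Enable the new site by symlinking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/app.conf /etc/nginx/sites-enabled/app.conf

# Obtain a certificate and let Certbot patch the server block
sudo certbot --nginx -d subdomain.example.com

# Check the syntax, then reload without dropping connections
sudo nginx -t && sudo systemctl reload nginx
```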
Multiply that by 10+ subdomains and it gets tedious. Certbot would occasionally fail to renew, configs would pile up, and every small change required careful syntax checking.
It works. But it’s a lot of ceremony for what should be simple.
Caddy — Automatic Everything
Caddy’s selling point is automatic HTTPS. You point a domain at it, and it handles the certificate — provisioning, renewal, everything. No Certbot, no cron jobs, no manual steps.
Here’s what an Nginx server block with SSL looks like:
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Here’s the same thing in Caddy:
app.example.com {
    reverse_proxy localhost:3000
}
That’s it. Three lines. HTTPS is automatic. Headers are set by default. Redirects from HTTP to HTTPS happen automatically.
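When you do need more than the defaults, extra behavior is still a one-liner per directive. A sketch using standard Caddyfile directives (the compression settings here are just an example, not something my setup requires):

```caddyfile
app.example.com {
	# Compress responses before sending them (optional)
	encode gzip zstd

	# Proxy everything to the local app
	reverse_proxy localhost:3000
}
```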
Multiple Subdomains
I run a lot of services on subdomains — staging environments, internal tools, client demos. With Caddy, each one is just a few lines:
app.example.com {
    reverse_proxy localhost:3000
}

api.example.com {
    reverse_proxy localhost:8080
}

staging.example.com {
    reverse_proxy localhost:3001
}
Each domain gets its own automatic HTTPS certificate. Adding a new service takes seconds — just add a block and Caddy handles the rest. No Certbot, no reload dance.
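If the per-site blocks start repeating themselves, the Caddyfile has a snippet feature for factoring out the shared parts. A sketch (the snippet name "common" is arbitrary, and the directives inside it are illustrative):

```caddyfile
# Define a reusable snippet once...
(common) {
	encode gzip
}

# ...then import it wherever it's needed
app.example.com {
	import common
	reverse_proxy localhost:3000
}

api.example.com {
	import common
	reverse_proxy localhost:8080
}
```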
Running Caddy in Docker
I run Caddy as a Docker container alongside my other services. The setup is straightforward:
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
The important part is persisting the /data volume — that’s where Caddy stores your certificates. If you lose that volume, Caddy has to re-provision every certificate from scratch, which can run you into Let’s Encrypt rate limits.
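After editing the Caddyfile, the container doesn’t need a restart: Caddy can reload its configuration in place. Assuming the service is named caddy and the Caddyfile is mounted at the path shown above, something like:

```shell
# Check the Caddyfile for errors first
docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile

# Graceful in-place reload, no dropped connections
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```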
Since my app containers are also running in Docker, I use the container name as the proxy target instead of localhost:
app.example.com {
    reverse_proxy myapp:3000
}
As long as Caddy and the app are on the same Docker network, it just works.
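Wiring that up in Compose means putting both containers on a shared network. A minimal sketch, where the service name myapp and the network name web are illustrative:

```yaml
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    networks: [web]

  myapp:
    image: myapp:latest   # illustrative image name
    networks: [web]       # no published ports needed; only Caddy is exposed

networks:
  web:

volumes:
  caddy_data:
```

Within a single compose file this works even without the explicit network, since Compose puts all services on a default shared network; declaring one matters when Caddy and the app live in separate compose projects.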
What I Don’t Miss
- Certbot renewal failures at 3 AM
- 30-line config blocks for a simple reverse proxy
- nginx -t after every change
- Managing separate certificates for each subdomain
- Symlink rituals between sites-available and sites-enabled
What to Watch Out For
Caddy isn’t perfect for every case:
- High-traffic production — Nginx is still faster at raw throughput. For most apps, you won’t notice the difference, but if you’re serving millions of requests per second, benchmark first
- Complex rewrite rules — Nginx’s location blocks and regex rewrites are more powerful. Caddy handles most cases, but edge cases might require workarounds

For my use case — reverse proxying a handful of services with automatic HTTPS — Caddy is significantly simpler.