Deploying to a VPS sounds like the cleanest way to ship: rent a server, SSH in, run Docker, open a few ports, and add SSL. In reality, the first deploy often works... and the second deploy is where everything starts to break. One tiny mismatch (a permission, a port, a proxy rule) and you're back to debugging at midnight. If you've ever stared at a blank page thinking "but it works locally", you're not alone. Deploying web apps to a VPS often fails not because one tool is broken, but because all the pieces don't line up.
In this post, we'll cover:
- What usually breaks
- Why it keeps happening
- How to avoid it with a repeatable process
- When automation is actually worth it
What usually breaks
Here are the most common failure points I've hit when deploying my web apps to a VPS:
SSH keys & permissions
- Wrong key, wrong user, wrong file permissions (`~/.ssh` is very picky)
- You can connect as `root` but not as your deploy user (or vice versa)
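The permission side is usually a two-minute fix. A sketch (the `deploy` user and hostname are placeholders):

```shell
# OpenSSH silently ignores keys when ~/.ssh or the files inside are
# group- or world-accessible, so restrict everything to the owner:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys

# When a login still fails, -v shows which key is offered and why
# the server rejects it:
#   ssh -v deploy@your-server.example
```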
Ports not exposed
- App listens on `127.0.0.1` instead of `0.0.0.0`
- Docker publishes the port, but the reverse proxy points somewhere else
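A quick way to tell the two cases apart on the server (port `3000` and the image name are placeholders):

```shell
# 127.0.0.1:3000 is only reachable from the VPS itself;
# 0.0.0.0:3000 accepts connections from outside. Check which one
# your app actually bound to:
ss -tln | grep ':3000' || echo "nothing listening on port 3000"

# Publishing a port with Docker (-p host:container) is a separate
# step: the app inside the container must still bind 0.0.0.0.
#   docker run -d -p 8080:3000 myapp
#   curl -I http://localhost:8080    # verify from the host first
```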
Docker image not building correctly
- "It builds locally", but the Dockerfile uses a different base image than your machine, leading to unexpected errors
- Native dependencies compile differently on the server architecture
- Production image accidentally includes dev dependencies
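A multi-stage Dockerfile with a pinned base image addresses most of this: local and server builds start from the same layers, and dev dependencies never reach the final stage. A sketch for a Node app (tags, paths, and scripts are assumptions):

```dockerfile
# Pin an explicit tag so local and server builds use the same base.
FROM node:20.11-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci                     # full install, including dev deps
COPY . .
RUN npm run build

# Final stage: production dependencies only.
FROM node:20.11-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

When your laptop and VPS differ in CPU architecture (e.g. Apple Silicon vs. x86_64), building with `docker buildx build --platform linux/amd64` avoids the native-dependency surprises too.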
Reverse proxy misconfiguration
- Wrong upstream (`localhost:3000` vs. container name)
- Missing headers (WebSockets, real IP, host)
- Multiple apps/proxies fighting for ports `80`/`443`; there can only be one
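With nginx, the fixes usually come down to a handful of directives. A minimal sketch, assuming the app container is reachable as `app:3000` on a shared Docker network:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Upstream: the container name on the Docker network,
        # not localhost (which is the proxy container itself).
        proxy_pass http://app:3000;

        # Forward the real client and host instead of the proxy's own.
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for WebSocket upgrades.
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```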
SSL renewal issues
- HTTPS works on day 1, then the certificate silently expires later (especially with certbot when auto-renewal isn't actually running)
- ACME challenges fail because port 80/443 isn't reachable
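Two checks catch most of this early (the domain is a placeholder, and the certbot line assumes you use certbot at all):

```shell
# On the server: dry-run the renewal. This surfaces unreachable ACME
# challenges *before* the real certificate silently expires.
#   sudo certbot renew --dry-run

# From anywhere: read the live certificate's expiry date.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
    | openssl x509 -noout -enddate 2>/dev/null \
    || echo "could not fetch certificate"
```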
Quick sanity check: when something breaks, it's usually one of these layers: DNS → reverse proxy → container networking → app binding. Debugging gets easier once you always think in that order.
Why this keeps happening
If you're wondering "why does this feel harder than it should be?" — it's because VPS deploys are rarely one clean workflow.
The root causes are boring but real:
Manual steps don't scale
- You do it once from memory
- Next time you forget the exact flags, the exact file, the exact order
Too many tools, each with sharp edges
- SSH, Docker, Compose, firewall rules, systemd, reverse proxy config, DNS, SSL, etc.
- Every tool is fine alone — the integration is what hurts
No repeatable process
- Your "setup" lives across random terminal history, notes, and half-remembered commands
- If you can't reproduce it from scratch, you'll keep re-learning the same lessons
VPS deployment debugging checklist
Here is a checklist to help you debug your VPS deployment:
- [ ] Can I SSH into the server as the deploy user?
- [ ] Is the app binding to `0.0.0.0`?
- [ ] Is the container port published and reachable?
- [ ] Is the reverse proxy pointing to the right upstream?
- [ ] Are ports 80 and 443 reachable for ACME challenges?
- [ ] Are DNS records pointing to the correct IP?
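The checklist translates almost line-for-line into a script you can keep on the server. A sketch (port and domain are placeholders; every check warns instead of failing hard):

```shell
#!/usr/bin/env sh
# Run through the deploy checklist in one go.
APP_PORT=3000
DOMAIN=example.com

echo "== app binding =="
ss -tln | grep ":$APP_PORT" || echo "WARN: nothing listening on port $APP_PORT"

echo "== published container ports =="
docker ps --format '{{.Names}}\t{{.Ports}}' 2>/dev/null || echo "WARN: docker not reachable"

echo "== DNS =="
dig +short "$DOMAIN" 2>/dev/null || echo "WARN: dig not available"

echo "== 80/443 reachability (run these from OUTSIDE the server) =="
echo "   curl -sI http://$DOMAIN && curl -sI https://$DOMAIN"
```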
How to avoid it
You don't need to become a DevOps engineer. You need a standardized setup that reduces the number of decisions you make on every deploy.
What helps the most:
Standardize your server
- Same OS baseline
- Same folder structure
- Same users / permissions model
- Same firewall defaults
Standardize your app runtime
- Same Docker patterns (multi-stage builds, small base images)
- Same environment variable strategy
- Same health checks
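As one way to make "same env strategy, same health checks" concrete, a Compose service can carry both with it. A sketch (service name, port, and `/health` endpoint are assumptions):

```yaml
services:
  app:
    build: .
    restart: unless-stopped
    env_file: .env            # one env-var strategy, everywhere
    healthcheck:
      # runs inside the container; assumes wget exists in the image
      # (it does in alpine-based ones)
      test: ["CMD", "wget", "-qO-", "http://127.0.0.1:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```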
Standardize ingress
- One reverse proxy that owns ports `80`/`443`
- One place for TLS
- One place to add domains
Reduce deploy to one command
- Build → ship → run → verify
- No “oh right, I also need to restart the proxy and open a port”
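The build → ship → run → verify steps can be sketched as a script you write once. This version generates a `deploy.sh` that builds locally, streams the image over SSH (no registry needed for a single VPS), restarts Compose, and checks a health endpoint; the host, image name, server path, and URL are all placeholders:

```shell
cat > deploy.sh <<'EOF'
#!/usr/bin/env sh
set -eu

HOST=deploy@your-server.example
IMAGE=myapp:latest
HEALTH_URL=https://example.com/health

# Build for the server's CPU architecture.
docker buildx build --platform linux/amd64 -t "$IMAGE" .

# Ship: stream the image over SSH instead of pushing to a registry.
docker save "$IMAGE" | ssh "$HOST" docker load

# Run: restart the stack on the server.
ssh "$HOST" 'cd /srv/myapp && docker compose up -d'

# Verify: fail the deploy loudly if the app doesn't come back.
sleep 5
curl -fsS "$HEALTH_URL" > /dev/null && echo "deploy OK"
EOF
chmod +x deploy.sh
```

After that, a deploy really is just `./deploy.sh`, and the "verify" step failing is what tells you the proxy or port work still isn't done.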
If you want a true "one command deploy" approach, that’s exactly what I built QuickDeploy for — a CLI to deploy web apps to your own server with a repeatable process.
Not every project needs tooling. But every project benefits from a deploy process you can repeat without thinking.
When automation helps (honestly)
Automation is not magic. It won't save a fundamentally broken app, and it won't choose your architecture for you. But it helps a lot when your problem is repeatability.
It tends to be worth it when you are:
A solo dev
- You want to ship features, not babysit servers
- You don't have "someone on the team who knows infra"
An indie hacker
- You're running small products and experiments
- You’d rather pay with a bit of structure than with random outages
Working on side projects
- You don't deploy daily, so you forget the steps
- You want "future you" to have an easy time
And it's less worth it when:
- You only deploy once a year and don’t care if it takes an afternoon
- You’re already deep into Kubernetes/Terraform land and that’s your happy place
Closing
Deploying to a VPS isn't hard because SSH, Docker, ports, or SSL are individually complicated. It's hard because you're juggling all of them at once, and small mistakes compound fast.
If you're currently stuck, tell me what's breaking for you (SSH, networking, reverse proxy, or SSL) and what stack you're deploying, and I'll help as best I can. Feel free to message me in the chat or email me at max@quickdeploy.dev.