Serverless vs Containers vs VMs

Three places to run your code, with different tradeoffs in cost, control, and cold starts.


Quick decision

  • **Bursty, event-driven, short-lived code** (webhook handler, image resize, scheduled job) → **Serverless** (Lambda, Cloud Functions, Vercel/Netlify functions).
  • **Persistent service, steady load, custom runtime** (API server, Next.js app) → **Containers** (Cloud Run, Fargate, GKE, Render, Fly.io).
  • **Specialist workloads** (GPU, custom kernel, >10min jobs, weird dependencies) → **VMs** (EC2, GCE, Hetzner).
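The decision list above can be encoded as a toy helper for review; the function name and flags are illustrative, not a real API.

```python
# Illustrative only: encodes the three-way decision above as a function.
def pick_platform(*, gpu_or_long_jobs: bool, steady_load: bool) -> str:
    if gpu_or_long_jobs:   # specialist workloads (GPU, >10min jobs) -> VMs
        return "vm"
    if steady_load:        # persistent service, custom runtime -> containers
        return "container"
    return "serverless"    # bursty, event-driven, short-lived default

print(pick_platform(gpu_or_long_jobs=False, steady_load=False))  # serverless
```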

Serverless tradeoffs

Pro
Scales to zero. Pay per request. No server management. Fast to prototype.
Con: Cold start
50ms-5s pause when a new instance spins up. Bad UX for user-facing APIs with spiky traffic.
Con: Timeouts
15 min hard limit on Lambda; 60 s or less on many edge runtimes. Long jobs need queues.
Con: State
Stateless by design. Need Redis/DB/S3 for anything persistent.
Con: Vendor lock-in
Each provider's runtime is different. Migrating = rewrite.

Containers tradeoffs

Pro
The same image runs locally, in CI, and in prod. Any runtime or language. Predictable cost at scale.
Pro: Long-running
Websockets, background workers, queues — all trivial.
Con: Always-on cost
Even idle services cost something (unless platform auto-sleeps, e.g., Cloud Run).
Con: More to manage
Build pipeline, image registry, secrets, deploys. Mostly automatable but setup cost exists.
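The "long-running" pro boils down to one process that stays up, holds in-memory state, and serves many requests, which is exactly what a container packages. A stdlib-only sketch of that shape (the handler and counter are illustrative):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

hits = 0  # in-process state: fine in a long-lived server, unreliable in serverless

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        global hits
        hits += 1
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"hit {hits}".encode())

    def log_message(self, *args):  # silence default request logging
        pass

# Port 0 asks the OS for any free port; a real container would EXPOSE a fixed one.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
for _ in range(2):
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(body.decode())  # hit 2
server.shutdown()
```

The same single-process shape carries over to websockets and background workers: the runtime is yours for the life of the container.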

When serverless bites

  • Cold starts on a user-facing API — an 800 ms TTFB wrecks p95 latency.
  • Huge cold-start penalty for heavy frameworks (Next.js, Spring Boot). Prefer Go/Rust or native runtimes.
  • Fan-out patterns (one request → many functions) — costs spike unpredictably.
  • Debugging distributed traces across 15 Lambdas is harder than 1 container's logs.
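The fan-out cost spike is easy to see with a back-of-envelope model. The prices below are illustrative placeholders (of the same order as typical published per-invocation and per-GB-second rates), not any provider's actual pricing:

```python
# Assumed, illustrative rates -- check your provider's real pricing page.
PRICE_PER_INVOCATION = 0.0000002   # USD per invocation
PRICE_PER_GB_SECOND = 0.0000167    # USD per GB-second of compute

def monthly_cost(requests: int, fan_out: int, mem_gb: float, seconds: float) -> float:
    """One incoming request triggers `fan_out` downstream function runs."""
    invocations = requests * fan_out
    gb_seconds = invocations * mem_gb * seconds
    return invocations * PRICE_PER_INVOCATION + gb_seconds * PRICE_PER_GB_SECOND

flat = monthly_cost(10_000_000, fan_out=1, mem_gb=0.5, seconds=0.2)
fanned = monthly_cost(10_000_000, fan_out=15, mem_gb=0.5, seconds=0.2)
print(round(fanned / flat, 1))  # 15.0 -- cost scales linearly with fan-out
```

In this simple model cost is linear in fan-out, which is why a seemingly small architectural change (one function calling fifteen) multiplies the bill the same way.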
