Building a Production Deployment Pipeline from Scratch

Andrius Lukminas · February 13, 2026 · 8 min read

When we started Boottify, deploying an app meant SSH-ing into the server and running git pull && npm run build. That worked for exactly one app. By the time we had a multi-tenant platform with dozens of customer apps, we needed something much more robust. Here's how we built our production deployment pipeline from the ground up.

THE PROBLEM: DEPLOYING AT SCALE

Our platform lets customers deploy web applications with a click. Each app gets its own subdomain, database, and isolated runtime. The challenge: we needed to orchestrate builds, push container images, provision Kubernetes resources, configure DNS, set up SSL certificates, and stream real-time logs — all reliably, all automatically.

The requirements were clear:

  • Zero-downtime deployments with automatic rollbacks on failure
  • Real-time feedback — users see build logs as they happen
  • Multi-stage Docker builds optimized for Next.js and Node.js apps
  • GitHub integration — push to main, auto-deploy
  • Kubernetes-native — each app in its own namespace with resource limits

DOCKER MULTI-STAGE BUILDS

Our Dockerfile template uses a three-stage build that keeps final images lean:

# Stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
# --include=dev keeps devDependencies, which the build stage needs
RUN npm ci --include=dev

# Stage 2: Build
FROM deps AS builder
COPY . .
RUN npm run build

# Stage 3: Production
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs && adduser -S nextjs -u 1001
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]

The result: final images under 150MB, compared to 800MB+ with a naive single-stage build. This dramatically reduces push times and cold-start latency.

THE DEPLOYMENT EXECUTOR

The heart of our pipeline is the DeploymentExecutor — a state machine that processes each deployment through discrete steps:

  1. Validation — Check source repo access, Dockerfile presence, resource quota
  2. Build — Docker build with real-time log capture via build stream API
  3. Push — Tag and push to our private registry
  4. Provision — Create/update K8s namespace, deployment, service, and ingress
  5. Health Check — Poll the new pod until it passes readiness probes
  6. DNS & SSL — Configure domain records and provision TLS certificates
  7. Finalize — Update deployment record, notify via webhook

Each step emits events to a WebSocket channel. The client dashboard subscribes and shows real-time progress with log output — no polling required.
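The step list above can be sketched as a sequential runner that emits an event per transition. The names here (StepName, DeployContext, emit) are illustrative assumptions, not our exact internals:

```typescript
// Hypothetical sketch of the step-runner pattern behind the executor.
type StepName =
  | "validate" | "build" | "push" | "provision"
  | "healthCheck" | "dnsAndSsl" | "finalize";

interface StepResult { ok: boolean; error?: string }

interface DeployContext {
  appId: string;
  commitSha: string;
  // In production this would publish to the deployment's WebSocket channel.
  emit: (event: { step: StepName; status: string; detail?: string }) => void;
}

type StepFn = (ctx: DeployContext) => Promise<StepResult>;

// Runs steps in order; a failed step stops the run so upstream code
// can trigger rollback to the last known-good image tag.
async function runDeployment(
  ctx: DeployContext,
  steps: Array<[StepName, StepFn]>,
): Promise<boolean> {
  for (const [name, fn] of steps) {
    ctx.emit({ step: name, status: "started" });
    const result = await fn(ctx);
    if (!result.ok) {
      ctx.emit({ step: name, status: "failed", detail: result.error });
      return false;
    }
    ctx.emit({ step: name, status: "succeeded" });
  }
  return true;
}
```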

GITHUB ACTIONS WEBHOOKS

For auto-deploy on push, we lean on GitHub Actions rather than running our own CI. When a user connects their repo, we set up a GitHub Actions workflow that:

  1. Triggers on push to the configured branch
  2. Sends a webhook to our /api/webhooks/deployment endpoint with commit SHA and metadata
  3. Our API creates a deployment record and queues the build
  4. Real-time status updates flow back via our deployment status API

This approach is simpler and more reliable than running our own build agents. GitHub handles the CI compute; we handle the CD.
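A minimal sketch of the endpoint's dedup logic follows. The payload shape and the in-memory set are assumptions for illustration; in production the dedup would live behind a database unique constraint:

```typescript
// Illustrative webhook payload; field names are assumptions.
interface DeployWebhook {
  appId: string;
  commitSha: string;
  branch: string;
}

// In-memory stand-in for a DB unique constraint on (appId, commitSha).
const seen = new Set<string>();

// Returns true if a new deployment was queued, false for a duplicate
// delivery — GitHub can redeliver the same webhook.
function handleDeploymentWebhook(
  payload: DeployWebhook,
  queue: (w: DeployWebhook) => void,
): boolean {
  const key = `${payload.appId}:${payload.commitSha}`;
  if (seen.has(key)) return false;
  seen.add(key);
  queue(payload);
  return true;
}
```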

KUBERNETES ORCHESTRATION

Each app gets its own Kubernetes namespace following the pattern app-{appId}. Within the namespace:

  • Deployment — Managed pod replicas with resource requests/limits
  • Service — ClusterIP service for internal routing
  • ConfigMap & Secrets — Environment variables managed via our dashboard
  • NetworkPolicy — Isolates tenant traffic

Nginx on the host handles SSL termination and proxies to Kubernetes service ClusterIPs. We chose this over Traefik's built-in TLS because Traefik's global HTTP→HTTPS redirect makes HTTP-01 ACME challenges impossible for custom domains.
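The per-app resources can be sketched as plain manifest objects in the shape the Kubernetes API expects. Replica counts and resource values below are illustrative, not our production defaults:

```typescript
// Namespace naming follows the app-{appId} pattern from the text;
// lowercasing keeps the name DNS-label safe.
function namespaceFor(appId: string): string {
  return `app-${appId.toLowerCase()}`;
}

// Minimal Deployment + ClusterIP Service manifests for one tenant app.
function buildManifests(appId: string, image: string) {
  const ns = namespaceFor(appId);
  const labels = { app: appId };
  return {
    deployment: {
      apiVersion: "apps/v1",
      kind: "Deployment",
      metadata: { name: appId, namespace: ns },
      spec: {
        replicas: 2, // illustrative
        selector: { matchLabels: labels },
        template: {
          metadata: { labels },
          spec: {
            containers: [{
              name: appId,
              image,
              ports: [{ containerPort: 3000 }],
              resources: {
                requests: { cpu: "100m", memory: "128Mi" },
                limits: { cpu: "500m", memory: "512Mi" },
              },
            }],
          },
        },
      },
    },
    service: {
      apiVersion: "v1",
      kind: "Service",
      metadata: { name: appId, namespace: ns },
      spec: {
        type: "ClusterIP", // nginx on the host proxies to this
        selector: labels,
        ports: [{ port: 80, targetPort: 3000 }],
      },
    },
  };
}
```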

REAL-TIME LOG STREAMING

Build logs stream through WebSockets. The architecture:

Docker Build → stdout/stderr → DeploymentExecutor → WebSocket Server → Client Dashboard
                                    ↓
                              Database (persisted for later viewing)

We buffer log lines and flush every 100ms for smooth rendering without overwhelming the browser. Failed builds preserve their complete log output for debugging.
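The 100ms batching can be sketched with a small buffer class; the names here are illustrative:

```typescript
// Collects log lines and flushes them in batches on a timer, so the
// client receives a few messages per second instead of one per line.
class LogBuffer {
  private lines: string[] = [];
  private timer: ReturnType<typeof setInterval>;

  constructor(
    private send: (batch: string[]) => void, // e.g. WebSocket broadcast
    intervalMs = 100,
  ) {
    this.timer = setInterval(() => this.flush(), intervalMs);
  }

  push(line: string): void {
    this.lines.push(line);
  }

  flush(): void {
    if (this.lines.length === 0) return;
    this.send(this.lines);
    this.lines = [];
  }

  // Stop the timer and deliver any remaining lines.
  close(): void {
    clearInterval(this.timer);
    this.flush();
  }
}
```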

LESSONS LEARNED

  • Always have rollback — We keep the previous deployment's image tag. If health checks fail, we automatically revert to the last known-good state
  • OCI spec matters — Docker image names must be lowercase. We learned this the hard way when an app named "MyApp" crashed the build pipeline
  • K8s namespace cleanup — Deleted apps must have their namespaces fully cleaned up, or you'll hit resource quota limits
  • Webhook idempotency — GitHub can send duplicate webhooks. Every deployment request is deduplicated by commit SHA
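The lowercase rule from the second lesson can be enforced up front with a small sanitizer. This is a simplified sketch, not the full OCI distribution reference grammar:

```typescript
// OCI image repository names must be lowercase; this maps an arbitrary
// app name to a safe repository segment (simplified, not exhaustive).
function toImageName(appName: string): string {
  const name = appName
    .toLowerCase()
    .replace(/[^a-z0-9._/-]+/g, "-")  // replace illegal characters
    .replace(/^[-._]+|[-._]+$/g, ""); // trim leading/trailing separators
  return name || "app"; // fallback if nothing survives
}
```

Running "MyApp" through this before the build is what would have saved us from the crash described above.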

The result: a deployment pipeline that handles hundreds of apps, deploys in under 3 minutes, and gives users the real-time feedback they expect from a modern platform.
