The Problem with Generic Deployments
Our marketplace offers hundreds of one-click deployable applications: WordPress, Ghost, Directus, Nextcloud, Gitea, and many more — each with its own set of required environment variables, expected container port, data volume paths, and database credential format.
Until now the wizard handed every app the same Kubernetes manifest: container port 3000, health endpoint /api/health, one storage volume at /app/storage, and a flat DATABASE_URL env var. That works well for Node.js SaaS apps built for this platform. It works poorly for everything else.
Ghost expects database__connection__host. WordPress expects WORDPRESS_DB_HOST. Laravel expects DB_HOST. And most apps don't listen on port 3000.
The configSchema: Per-App Deployment Knowledge
We introduced a configSchema field on app_templates. It stores a structured JSON definition of everything the deployer needs to know about an app's runtime requirements:
- Environment variable groups — labelled sections (Database, Mail, Storage, etc.) containing individual fields with types, defaults, descriptions, and a flag for whether the platform should auto-fill provisioned credentials
- Container port — the port the app actually listens on
- Health endpoint — the path used for readiness and liveness probes
- Required volumes — extra persistent mounts beyond the default storage volume, with sizes
- Environment mappings (envMappings) — a translation table from the platform's provisioned credential names to the app's expected variable names
We wrote a seed script that populated configSchema for all 522 apps currently in the marketplace. The 30 most-deployed applications got detailed schemas with real defaults and documentation; the remainder got sensible baseline configurations.
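As a sketch, the shape of a configSchema entry might look like the following TypeScript. The interface and field names here are illustrative, not the production schema; Ghost's default port 2368 is used as the example, and the health path shown is a placeholder.

```typescript
// Illustrative shape of the configSchema described above (names are assumptions).
interface ConfigField {
  key: string;
  type: "string" | "number" | "boolean" | "url" | "email" | "select";
  default?: string | number | boolean;
  description?: string;
  autoFill?: boolean; // true when the platform injects a provisioned credential
}

interface ConfigSchema {
  envGroups: { label: string; fields: ConfigField[] }[];
  containerPort: number;
  healthEndpoint: string;
  volumes?: { mountPath: string; sizeGi: number }[];
  envMappings?: Record<string, string>; // platform name -> app-expected name
}

// Example: a Ghost-like schema entry
const ghostSchema: ConfigSchema = {
  envGroups: [
    {
      label: "Database",
      fields: [
        { key: "database__connection__host", type: "string", autoFill: true },
      ],
    },
  ],
  containerPort: 2368, // Ghost's default listen port
  healthEndpoint: "/", // illustrative; the real schema stores the app's probe path
  envMappings: {
    DATABASE_HOST: "database__connection__host",
  },
};
```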
envMappings: Credential Translation
When the platform provisions a MySQL database for a deployment, it generates DATABASE_HOST, DATABASE_NAME, DATABASE_USER, and DATABASE_PASSWORD. An app like Ghost doesn't use those names — it expects a nested config key format.
envMappings solves this without any manual work from the operator:
"envMappings": {
  "DATABASE_HOST": "database__connection__host",
  "DATABASE_NAME": "database__connection__database",
  "DATABASE_USER": "database__connection__user",
  "DATABASE_PASSWORD": "database__connection__password"
}
The configure step reads the provisioned credentials and applies the mapping before writing the Kubernetes secret. The app receives its credentials in exactly the format it expects, and the operator never has to know the difference.
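The mapping step can be sketched as a small function (the name applyEnvMappings and the signature are assumptions, not the platform's actual code):

```typescript
// Rename provisioned credentials to the variable names the app expects,
// before they are written into the Kubernetes secret.
function applyEnvMappings(
  provisioned: Record<string, string>,
  envMappings: Record<string, string>,
): Record<string, string> {
  const result: Record<string, string> = {};
  for (const [platformName, value] of Object.entries(provisioned)) {
    // Use the mapped name when one exists; otherwise keep the platform name.
    result[envMappings[platformName] ?? platformName] = value;
  }
  return result;
}

// Ghost example, using the mapping table above:
const secretEnv = applyEnvMappings(
  { DATABASE_HOST: "mysql.internal", DATABASE_NAME: "ghost" },
  {
    DATABASE_HOST: "database__connection__host",
    DATABASE_NAME: "database__connection__database",
  },
);
// secretEnv["database__connection__host"] === "mysql.internal"
```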
Wizard Step 4: Configure
The deployment wizard previously had five steps: Plan → Resources → Payment → Deploy → Verify. We inserted a new Configure step between Payment and Deploy, making it step 4 of the flow.
The step auto-skips entirely when the selected app's configSchema has no user-facing fields — so simple apps remain a clean five-step flow. When fields exist, the step renders a dynamic form with:
- Collapsible field groups (Database, Application, Mail, Storage…)
- Auto-filled values for fields the platform can derive from the provisioning plan
- Per-field validation with type checking (string, number, boolean, URL, email, select)
- Descriptions and placeholder examples pulled from the schema
- A clear indicator when a field will be injected automatically by the provisioner
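Per-field type checking could look roughly like this — an illustrative sketch, not the wizard's actual validation code:

```typescript
// Field types mirror the schema's supported types listed above.
type FieldType = "string" | "number" | "boolean" | "url" | "email" | "select";

function validateField(type: FieldType, value: string, options?: string[]): boolean {
  switch (type) {
    case "string":
      return value.length > 0;
    case "number":
      return value.trim() !== "" && !Number.isNaN(Number(value));
    case "boolean":
      return value === "true" || value === "false";
    case "url":
      try {
        new URL(value); // throws on malformed URLs
        return true;
      } catch {
        return false;
      }
    case "email":
      return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
    case "select":
      return options?.includes(value) ?? false;
  }
}
```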
Kubernetes Provisioner: Un-hardcoded
The previous K8s provisioner hardcoded port 3000 in four different places and always used /api/health for readiness and liveness probes. These values are now read from AppDeploymentConfig:
- containerPort — defaults to 3000 if not set
- healthEndpoint — defaults to /api/health if not set
- extraVolumes — additional PersistentVolumeClaim + VolumeMount pairs from the schema
- resourceLimits — plan-based memory and CPU limits passed through instead of hardcoded 256Mi/512Mi
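A minimal sketch of how those fallbacks might be resolved, assuming the AppDeploymentConfig field names listed above (the function and the exact shape of resourceLimits are illustrative):

```typescript
interface AppDeploymentConfig {
  containerPort?: number;
  healthEndpoint?: string;
  extraVolumes?: { name: string; mountPath: string; sizeGi: number }[];
  resourceLimits?: { memoryRequest: string; memoryLimit: string };
}

function resolveRuntime(config: AppDeploymentConfig) {
  return {
    // The fallbacks are the previously hardcoded values.
    containerPort: config.containerPort ?? 3000,
    healthEndpoint: config.healthEndpoint ?? "/api/health",
    extraVolumes: config.extraVolumes ?? [],
    resourceLimits:
      config.resourceLimits ?? { memoryRequest: "256Mi", memoryLimit: "512Mi" },
  };
}
```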
The provisioner also now creates a NetworkPolicy, ResourceQuota, and LimitRange for each app namespace. These are best-effort (non-fatal if they fail) but mean every new app deployment gets namespace-level isolation and resource guardrails out of the box.
Pods also received a lifecycle.preStop hook (sleep 5 && kill -SIGTERM 1) so rolling updates drain in-flight requests before the container terminates.
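Expressed as the object a TypeScript provisioner might add to each container spec (the Kubernetes lifecycle field is real; the surrounding code is an assumption):

```typescript
// preStop hook serialized into the container spec of each Deployment.
const lifecycle = {
  preStop: {
    exec: {
      // Give the load balancer time to stop routing new requests,
      // then forward SIGTERM to the app process (PID 1).
      command: ["sh", "-c", "sleep 5 && kill -SIGTERM 1"],
    },
  },
};
```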
Per-App Nginx Configuration
Subdomain deployments now get a per-app nginx config generated by the deploy step rather than a shared catch-all. This allows custom proxy headers, client body size limits, and timeout values to be defined per app type without a global config change.
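A hypothetical generator for such a per-app config might look like this; the function name, parameters, and template details are all assumptions rather than the deploy step's actual code:

```typescript
// Render an nginx server block with per-app proxy settings.
function renderNginxConfig(opts: {
  subdomain: string;
  upstreamPort: number;
  clientMaxBodySize?: string;
  proxyReadTimeoutSec?: number;
}): string {
  return [
    `server {`,
    `  listen 443 ssl;`,
    `  server_name ${opts.subdomain};`,
    `  client_max_body_size ${opts.clientMaxBodySize ?? "10m"};`,
    `  location / {`,
    `    proxy_pass http://127.0.0.1:${opts.upstreamPort};`,
    `    proxy_set_header Host $host;`,
    `    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;`,
    `    proxy_read_timeout ${opts.proxyReadTimeoutSec ?? 60}s;`,
    `  }`,
    `}`,
  ].join("\n");
}

const cfg = renderNginxConfig({
  subdomain: "app.example.com",
  upstreamPort: 2368,
  clientMaxBodySize: "64m",
});
```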
Bug Fixes Bundled in This Release
- Hetzner DNS encodeURIComponent bug — rrset names containing dots were being double-encoded, breaking GET/PUT/DELETE for any DNS record. Fixed in hetzner.ts.
- Deploy API response unwrapping — the deploy route returned { data: { deploymentId } } but the wizard was reading data.deploymentId. Fixed to unwrap correctly.
- postDeployConfigure return type — was returning a K8s secret reference string on failure, which downstream code tried to use as a boolean. Now returns a proper boolean.
- Deploy step guard — the Deploy step is now blocked unless a deploymentId exists, preventing confusing half-submitted states.
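The response-unwrapping fix, as a minimal sketch (the types are inferred from the bug note above, not taken from the actual route code):

```typescript
// The deploy route wraps its payload in a `data` envelope.
interface DeployResponse {
  data: { deploymentId: string };
}

function readDeploymentId(res: DeployResponse): string {
  // Before the fix, the wizard read `res.deploymentId`, which is undefined.
  // Unwrapping the envelope first yields the real ID.
  return res.data.deploymentId;
}
```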
Debug Logger
A collapsible debug logger panel (debug-logger.tsx) was added to the deployment wizard. Operators can click the ./. button in the wizard footer to stream real-time deployment logs directly in the browser, without opening a terminal. Each log line is timestamped, colour-coded by level, and filterable.