
How to Ship AI-Generated Apps to Production with Custom Domains

Ship AI-built apps to production with custom domains, auto-SSL, and sleep mode. Turn any workspace into a live deployment - no DevOps needed.

Oblien Team


Your AI agent just built an app. It's running in a workspace at a preview URL. Now what?

Most teams hit a wall here. The agent generates great code, but getting it to production - with a custom domain, SSL, auto-restart, monitoring, and cost optimization - requires a separate infrastructure pipeline. Terraform, CI/CD, Docker, load balancers, DNS configuration...

Or you skip all of that. Turn the workspace into a deployment. Connect your domain. Done.

This guide shows how to take anything running in a workspace - whether an AI agent built it or you coded it yourself - and ship it to production in minutes.


The Deployment Pipeline

Workspace (dev)  ──→  Build  ──→  Production Workspace  ──→  Live App

                                    Custom domain
                                    Auto-SSL
                                    Auto-restart
                                    Sleep mode
                                    Monitoring

What makes this different from traditional deployment:

| Traditional | Oblien Deployment |
| --- | --- |
| Set up CI/CD pipeline | One API call |
| Configure Docker/Kubernetes | Same workspace, just production mode |
| Set up SSL with Let's Encrypt | Auto-provisioned |
| Configure DNS | Connect domain, done |
| Set up process management (pm2, systemd) | Built-in workloads |
| Configure health checks | Built-in monitoring |
| Set up alerting | Built-in metrics |
| Estimate: 2-5 days | Estimate: 5 minutes |

Step 1: Make It Permanent

Workspaces can be temporary (auto-delete after TTL) or permanent (always-on with auto-restart).

When your app is ready for production, convert the workspace to permanent mode. This enables:

  • Auto-restart - if the VM reboots or crashes, it comes back automatically
  • Persistent storage - data survives restarts
  • No TTL expiry - the workspace runs until you stop it

The same workspace that was your development environment becomes your production server. No migration, no redeployment, no data copying.
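In API terms, the conversion is a single update call. Here is a minimal Python sketch of what that request might look like; the endpoint path and field names (mode, auto_restart, persistent_storage) are illustrative assumptions, not Oblien's documented API:

```python
import json
import urllib.request

def make_permanent(workspace_id: str, api_token: str) -> urllib.request.Request:
    """Build a (hypothetical) PATCH request that switches a workspace
    from temporary (TTL-based) to permanent mode."""
    payload = {
        "mode": "permanent",       # assumed field: disables TTL expiry
        "auto_restart": True,      # come back automatically after a crash or reboot
        "persistent_storage": True,
    }
    return urllib.request.Request(
        f"https://api.oblien.com/v1/workspaces/{workspace_id}",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )

# Build (but don't send) the request for inspection:
req = make_permanent("ws_123", "TOKEN")
```

Sending the request with urllib.request.urlopen(req) would apply the change; the sketch stops at building it so you can see the shape of the call.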


Step 2: Set Up Workloads

A workload is a managed background process - think systemd for your app, but simpler.

Create a workload for your production server. Specify the command to run (like npm start or python -m gunicorn app:app), set the working directory and environment variables, and configure the restart policy.

Restart policies:

| Policy | Behavior | Use Case |
| --- | --- | --- |
| always | Restart on any exit | Production web servers |
| on-failure | Restart only on non-zero exit | Batch jobs |
| never | Don't restart | One-time tasks |

With the always restart policy and max_restarts: 10, your app survives crashes, OOM kills, and unexpected exits. The restart delay prevents crash loops from consuming resources.
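The decision a workload supervisor makes each time a process exits can be captured in a few lines. This is illustrative logic that matches the table above, not Oblien's actual implementation:

```python
def should_restart(policy: str, exit_code: int,
                   restarts: int, max_restarts: int) -> bool:
    """Decide whether a workload process should be restarted after it exits."""
    if restarts >= max_restarts:
        return False            # cap reached: stop retrying, surface a failure
    if policy == "always":
        return True             # production servers: restart on any exit
    if policy == "on-failure":
        return exit_code != 0   # batch jobs: only restart if the run failed
    return False                # "never": one-time tasks

# A clean exit under on-failure is left alone; a crash is retried:
assert should_restart("on-failure", 0, 0, 10) is False
assert should_restart("on-failure", 1, 0, 10) is True
assert should_restart("always", 0, 10, 10) is False  # hit max_restarts
```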

Multiple workloads per workspace:

Run your entire stack in one workspace:

  • Workload 1: Next.js production server (port 3000)
  • Workload 2: Background job worker
  • Workload 3: Cron scheduler

Each workload gets independent restart policies, logging, and monitoring.


Step 3: Connect Your Domain

This is where it gets exciting: pointing your custom domain at the app.

When you connect a domain:

  1. SSL certificate - Let's Encrypt certificate auto-provisioned (typically in seconds)
  2. Routing - domain → Oblien edge proxy → your workspace's internal IP
  3. HTTPS enforced - all traffic encrypted, HTTP redirects to HTTPS
  4. www handling - optional www prefix routing

The workspace never gets a public IP. All traffic goes through Oblien's edge proxy, which terminates TLS and forwards the request to your app internally. This means:

  • No open ports on the VM (reduced attack surface)
  • DDoS protection at the edge
  • SSL renewal is automatic (14 days before expiry)
  • No Nginx/Caddy configuration needed

Multiple domains on one workspace

If your app serves multiple domains (e.g., app.example.com for the frontend, api.example.com for the API), connect both. Each gets its own SSL certificate.
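Connecting a domain is one request per hostname. A hedged sketch of the request body for the two-domain setup above; the field names (hostname, target_port, www_redirect) are assumptions for illustration:

```python
import json

def connect_domain_payload(workspace_id: str, hostname: str, port: int,
                           www_redirect: bool = False) -> str:
    """Build the JSON body for a (hypothetical) domain-connect call.
    SSL is implicit: the platform provisions a Let's Encrypt certificate
    once the hostname resolves to the edge proxy."""
    return json.dumps({
        "workspace_id": workspace_id,
        "hostname": hostname,
        "target_port": port,        # internal port the edge proxy forwards to
        "www_redirect": www_redirect,
    })

# Frontend and API on the same workspace, different hostnames and ports:
frontend = connect_domain_payload("ws_123", "app.example.com", 3000,
                                  www_redirect=True)
api = connect_domain_payload("ws_123", "api.example.com", 8080)
```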


Step 4: Expose Preview URLs

Before connecting a custom domain (or in addition to it), you can expose any port as a preview URL.

Each exposed port gets an instant HTTPS URL like https://a1b2c3d4e5f6g7h8.preview.oblien.com. You can expose up to 20 ports per workspace.

This is great for:

  • Staging environments - share a preview URL with your team for review
  • API endpoints - expose your backend API while the frontend uses the custom domain
  • Webhook receivers - give third-party services a URL to call
  • Development previews - show clients work-in-progress before connecting the final domain
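Preview hostnames follow a predictable shape, which is convenient when scripting webhook configuration. A small validator; the 16-character lowercase-alphanumeric pattern is inferred from the example URL above, not a documented guarantee:

```python
import re

# Inferred shape: https://<16 lowercase alphanumeric chars>.preview.oblien.com
PREVIEW_URL = re.compile(r"^https://[a-z0-9]{16}\.preview\.oblien\.com$")

def is_preview_url(url: str) -> bool:
    """Check whether a URL looks like an Oblien preview URL."""
    return PREVIEW_URL.match(url) is not None

assert is_preview_url("https://a1b2c3d4e5f6g7h8.preview.oblien.com")
assert not is_preview_url("http://example.com")
```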

Step 5: Enable Sleep Mode

Not every production workload is busy 24/7. An internal tool might only be used during business hours. A staging environment might sit idle most nights.

Sleep mode automatically pauses idle workspaces to save costs. When traffic arrives, the workspace resumes automatically within seconds.

How it works:

Workspaces sleep after a configurable idle period. When the next request arrives, the workspace wakes up and serves it. Active workspaces stay awake; idle workspaces sleep.

Cost savings: A workspace that's active 8 hours/day and sleeping 16 hours/day costs ~66% less than running 24/7. For internal tools and staging environments, sleep mode can reduce costs by 80-90%.
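With per-second billing, the savings claim is simple arithmetic: you pay only for awake seconds. A quick check of the numbers above:

```python
def sleep_mode_savings(active_hours_per_day: float) -> float:
    """Fraction of the 24/7 cost saved when billing stops during sleep."""
    return 1 - active_hours_per_day / 24

# 8 active hours/day -> roughly the ~66% quoted above
assert 66 <= sleep_mode_savings(8) * 100 <= 67

# Internal tools used ~3-4 hours/day land in the 80-90% range
assert 0.80 <= sleep_mode_savings(4) <= sleep_mode_savings(3) <= 0.90
```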


Shipping What AI Agents Build

Here's the full flow for deploying AI-generated apps:

1. Agent builds the app

In a development workspace, the AI agent writes code, installs dependencies, runs tests. The app is working at localhost:3000 inside the workspace.

2. Convert to production

Make the workspace permanent and create a production workload with your start command and always restart policy.

3. Connect a domain

Link your custom domain. SSL provisions automatically. Your app is now live at https://app.yourdomain.com.

4. Enable sleep mode (optional)

For apps that don't need 24/7 uptime, enable sleep mode to save costs.

5. Agent iterates

Need changes? The AI agent modifies the code in the same workspace, and the workload restarts with the new code. For frameworks that support graceful reload, updates happen with zero downtime.


Resource Tiers for Production

Choose the right size for your deployment:

| Tier | CPU | Memory | Best For |
| --- | --- | --- | --- |
| Lightweight | 0.5 vCPU | 512 MB | Static sites, simple APIs |
| Standard | 1 vCPU | 1 GB | Most web apps, dashboards |
| Performance | 2 vCPU | 2 GB | High-traffic apps, data processing |
| Enterprise | 4 vCPU | 4 GB | Complex apps, multiple services |

You can change the resource tier at any time. The workspace restarts with the new allocation.
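When scripting deployments, tier choice can be data-driven. A sketch using the table above; the tier names and specs come from the table, while the selection logic is my own illustration:

```python
# (tier, vCPU, memory in MB) from the table above, smallest first
TIERS = [
    ("Lightweight", 0.5, 512),
    ("Standard", 1.0, 1024),
    ("Performance", 2.0, 2048),
    ("Enterprise", 4.0, 4096),
]

def pick_tier(need_vcpu: float, need_mb: int) -> str:
    """Return the smallest tier satisfying both CPU and memory requirements."""
    for name, vcpu, mb in TIERS:
        if vcpu >= need_vcpu and mb >= need_mb:
            return name
    raise ValueError("Requirements exceed the largest tier")

assert pick_tier(0.5, 512) == "Lightweight"
assert pick_tier(1.0, 768) == "Standard"
assert pick_tier(1.5, 1024) == "Performance"
```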


Production Modes

Host mode

Your app runs as a live process in the workspace. Best for:

  • Dynamic web applications (Next.js, Express, Django, Rails)
  • API servers
  • WebSocket applications
  • Background workers

Static mode

Your built files are served directly from the edge CDN. Best for:

  • Static sites (HTML, CSS, JS)
  • Single-page applications (React, Vue, Angular builds)
  • Documentation sites
  • Landing pages

Static mode is faster (CDN-cached at the edge) and cheaper (no running VM).


Monitoring Your Deployment

Every production workspace has built-in monitoring:

  • CPU usage - real-time and historical
  • Memory usage - allocated vs used
  • Disk I/O - read/write throughput
  • Network traffic - inbound/outbound bytes
  • Uptime - seconds since last boot
  • Workload status - running, restarting, failed

Access metrics through the dashboard UI or the API for integration with your own monitoring stack.

Workload logs

View stdout/stderr from each workload in real-time via log streaming. No need to SSH into the workspace to check logs - they're available through the dashboard and API.


The Cost

Production deployments are billed per-second for actual compute. Pricing varies by resource tier - check the pricing page for current rates.

For a standard (1 CPU, 1 GB) app running 24/7:

  • Competitive with traditional hosting
  • With sleep mode (8hr active/day): significantly less

Compare to traditional hosting:

  • AWS EC2 t3.small: $15/month (no auto-SSL, no domain routing, manual setup)
  • Heroku Standard: $25/month (limited features)
  • Vercel Pro: $20/month (serverless only)

Oblien deployments include everything - compute, SSL, domain routing, monitoring, auto-restart - in a single price.


From Zero to Production: Complete Example

Here's a complete scenario. You're building a SaaS product with AI:

Minute 0-3: AI agent generates a Next.js app with a dashboard, auth, and API routes in a development workspace.

Minute 3-4: You review the generated code. Looks good. Convert the workspace to permanent mode and set up a production workload.

Minute 4-5: Connect app.yoursaas.com. SSL auto-provisions. Your SaaS is live.

Day 2: A user reports a bug. You tell the AI agent to fix it. The agent modifies the code, the workload restarts, the fix is live in seconds.

Day 7: You add sleep mode since the app is only used during business hours. Monthly cost drops by 60%.

Day 30: Traffic is growing. You upgrade from Lightweight to Standard tier. The workspace restarts with more resources. Zero migration needed.


Summary

Shipping AI-generated apps to production:

  1. Make it permanent - auto-restart, persistent storage
  2. Create a workload - managed process with restart policy
  3. Connect your domain - auto-SSL, edge proxy, instant routing
  4. Enable sleep mode - save costs when idle
  5. Monitor - built-in metrics, logs, and status tracking

No Terraform. No Docker. No CI/CD pipeline. No Nginx configuration. No manual SSL management.

The workspace where the agent built the app IS the production server. Same environment, same files, same running processes - just with production-grade reliability and a public domain.

Related reading: From Idea to Deployed App in 60 Seconds | Oblien Documentation