
How to Build a Lovable Clone with AI Agents That Write, Deploy, and Ship Full Apps

Build your own AI app builder like Lovable or Bolt - users describe an app, AI agents write code, set up the database, and deploy it live.

Oblien Team

Lovable, Bolt, and v0 changed how people think about building software. You describe what you want, and an AI builds it for you - writing code, setting up databases, and deploying a live app in minutes.

What most people don't realize: you can build this yourself. The core architecture is simpler than it looks. You need an orchestrator agent, a coding agent, isolated environments for each user's app, and a way to deploy them with a public URL.

This guide walks you through the full architecture using open-source AI agents and Oblien's workspace infrastructure.


What Makes an AI App Builder Work

At its core, every Lovable-style product does four things:

  1. Understands intent - the user says "build me a task management app with auth"
  2. Generates code - an AI agent writes the full codebase
  3. Runs the code - the app gets installed, built, and started somewhere
  4. Delivers a live URL - the user sees their app running immediately

The hard part isn't the AI. It's the infrastructure. You need to give each user a safe, isolated environment where their generated code can run - without one user's buggy app crashing another user's.

That's where most people get stuck. Let's solve it.


The Architecture

User: "Build me a todo app with Postgres"


┌───────────────────────────┐
│     Your SaaS Backend     │
│   (handles auth, billing, │
│    project management)    │
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│   Orchestrator Agent      │
│   (OpenClaw / LangChain)  │
│                           │
│   Plans the build:        │
│   1. Scaffold project     │
│   2. Write code           │
│   3. Set up database      │
│   4. Deploy & return URL  │
└──────┬─────────────┬──────┘
       │             │
       ▼             ▼
┌────────────┐  ┌────────────┐
│  Coding    │  │  User's    │
│  Agent     │  │  Workspace │
│ (Claude    │  │ (runs the  │
│  Code)     │  │  final app)│
└────────────┘  └────────────┘

Each box is a separate, hardware-isolated microVM. The orchestrator plans the work, the coding agent writes code, and the user's workspace runs the final app.


Step 1: The Orchestrator Agent

The orchestrator is the brain. When a user says "build me a task management app," it breaks that into concrete steps:

  1. Choose tech stack (Next.js + Postgres for this request)
  2. Scaffold the project structure
  3. Delegate code writing to the coding agent
  4. Create a database workspace
  5. Wire the app to the database
  6. Start the dev server
  7. Return a live preview URL

You can build this with OpenClaw, LangChain, CrewAI, or any agent framework. The orchestrator runs in a permanent workspace on Oblien - it stays alive between requests and has the credentials to create new workspaces.

Here's the key insight: the orchestrator doesn't write code itself. It delegates to specialist agents. This separation keeps things reliable - if the coding agent crashes, the orchestrator retries or switches strategies.
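The plan above can be sketched as plain data the orchestrator walks through step by step. This is an illustrative skeleton, not an OpenClaw or LangChain API - the step names and `BuildPlan` shape are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class BuildStep:
    name: str
    done: bool = False

@dataclass
class BuildPlan:
    steps: list = field(default_factory=list)

    def next_step(self):
        # First step that hasn't completed yet, or None when the build is done.
        return next((s for s in self.steps if not s.done), None)

def make_plan(request: str) -> BuildPlan:
    # Every user request expands into the same ordered pipeline;
    # the tech-stack choice happens inside the first step.
    return BuildPlan([BuildStep(n) for n in [
        "choose_stack", "scaffold_project", "write_code",
        "create_database", "wire_database", "start_dev_server",
        "return_preview_url",
    ]])
```

Because the plan is explicit data, a crashed step can be retried in place without redoing the whole build.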


Step 2: The Coding Agent

This is where Claude Code, GPT-Engineer, Aider, or any AI coding agent comes in. It runs in its own isolated workspace separate from the orchestrator.

The orchestrator sends it a task like:

"Create a Next.js 14 app with a task management interface. Include a Task model with title, description, status, and due date. Use Postgres via Prisma. Include CRUD API routes and a clean dashboard UI."

The coding agent:

  • Scaffolds the project
  • Writes all the files
  • Installs dependencies
  • Runs the build to verify everything compiles

It works inside its own microVM, so if it runs something weird during code generation, nothing else is affected.
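The task prompt itself can be assembled from a structured app spec, so the orchestrator never hand-writes free text per request. A minimal sketch - the spec fields are an assumed shape, not a fixed schema:

```python
def build_task_prompt(spec: dict) -> str:
    """Turn a structured app spec into a coding-agent task prompt."""
    fields = ", ".join(spec["model_fields"])
    return (
        f"Create a {spec['framework']} app with a {spec['feature']} interface. "
        f"Include a {spec['model']} model with {fields}. "
        f"Use {spec['database']} via {spec['orm']}. "
        f"Include CRUD API routes and a clean dashboard UI."
    )

spec = {
    "framework": "Next.js 14",
    "feature": "task management",
    "model": "Task",
    "model_fields": ["title", "description", "status", "due date"],
    "database": "Postgres",
    "orm": "Prisma",
}
```

`build_task_prompt(spec)` yields the same kind of instruction shown above, which keeps prompts consistent across users and easy to version.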


Step 3: Give Each User Their Own Environment

This is where the magic happens. Every user's app runs in a separate, isolated workspace.

When a user asks for an app:

  1. Create a workspace - a fresh Linux VM with Node.js, booted in ~130ms
  2. Copy the generated code into it
  3. Set up a database - another workspace running Postgres, connected via private network
  4. Start the app - run npm run dev or npm start
  5. Expose a URL - instant HTTPS preview with no extra configuration

Each user's app:

  • Runs in its own Linux kernel (not a shared container)
  • Has encrypted disk storage
  • Can't see or access other users' apps
  • Gets a unique preview URL like https://a1b2c3.preview.oblien.com

This is what Lovable, Bolt, and similar platforms do under the hood - but with Oblien, you get hardware-level isolation instead of container-level isolation.
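The provisioning step can be sketched as the requests your backend would assemble per user. The payload field names and values here are illustrative assumptions, not Oblien's actual API:

```python
import secrets

def provision_app(project_id: str, code_dir: str) -> dict:
    """Assemble the workspace-creation requests for one user's app.

    Hypothetical payload shapes - swap in the real workspace API calls.
    """
    subdomain = secrets.token_hex(3)  # e.g. "a1b2c3"
    return {
        "app_workspace": {
            "image": "node-20",
            "copy": code_dir,                    # generated code goes here
            "command": "npm run dev",
            "isolation": "microvm",              # own kernel, not a shared container
        },
        "db_workspace": {
            "image": "postgres-16",
            "network": f"private-{project_id}",  # only the app workspace joins this
            "internet": False,
        },
        "preview_url": f"https://{subdomain}.preview.oblien.com",
    }
```

The key property is that nothing in one user's payload references another user's workspaces - isolation falls out of the data model.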


Step 4: The Database Layer

Most generated apps need a database. Instead of running Postgres on the same machine as the app (risky), create a dedicated database workspace:

  • Air-gapped - no internet access, can't leak data
  • Private network only - only the user's app workspace can reach it
  • Encrypted at rest - AES-256 per workspace
  • Isolated - each user gets their own Postgres instance, not a shared database

When the user deletes their project, both the app workspace and database workspace are destroyed. The encryption keys are deleted, making data cryptographically unrecoverable.
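A sketch of the database-workspace settings and the teardown path - field names and the key-store shape are illustrative assumptions, not Oblien's API schema:

```python
def db_workspace_config(project_id: str) -> dict:
    # Mirrors the guarantees above: air-gapped, private network,
    # per-workspace encryption at rest.
    return {
        "image": "postgres-16",
        "internet_access": False,                # air-gapped
        "network": f"private-{project_id}",      # only the app workspace can reach it
        "disk_encryption": "aes-256-per-workspace",
    }

def delete_project(workspaces: dict, keys: dict, project_id: str) -> None:
    # Destroy both workspaces and drop the encryption key:
    # with the key gone, the data is cryptographically unrecoverable.
    workspaces.pop(f"{project_id}-app", None)
    workspaces.pop(f"{project_id}-db", None)
    keys.pop(project_id, None)
```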


Step 5: Live Preview URLs

Once the app is running, expose the dev server port for the user:

The user gets a URL like https://f7g8h9.preview.oblien.com - instant HTTPS, no DNS setup, no certificate management.

For production deployments, you can map custom domains with auto-provisioned TLS certificates. The same workspace that was a dev environment becomes a production deployment.


Handling the Real-World Edge Cases

What if the AI writes broken code?

The orchestrator retries. It sends the error output back to the coding agent with context:

"The build failed with this error: [error]. Fix the code."

Most coding agents are good at fixing their own mistakes when given the error message. Set a retry limit (3-5 attempts) and show the user a clear error if it still fails.

What if a user's app consumes too many resources?

Each workspace has hard resource limits - CPU, memory, and disk are capped at the VM level. A user's app literally cannot consume more than what's allocated. No noisy neighbor problems.

What about persistent data?

Workspaces persist data in their writable filesystem layer. For permanent apps, use permanent workspaces with auto-restart. For temporary previews, set a TTL so the workspace auto-deletes after 24 hours.
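The TTL check your reaper job would run is a one-liner; a sketch, assuming the reaper tracks each workspace's creation time:

```python
from datetime import datetime, timedelta

def should_auto_delete(created_at: datetime, ttl_hours: int, now: datetime) -> bool:
    # Temporary preview workspaces are reaped once their TTL expires;
    # permanent workspaces simply never get a TTL.
    return now - created_at >= timedelta(hours=ttl_hours)
```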

What about scaling?

Oblien workspaces boot in ~130ms, so your orchestrator can create hundreds of user workspaces in parallel. Each workspace is independent - there's no shared state to become a bottleneck.


The Cost Model

This architecture is surprisingly affordable because:

  • Paused workspaces cost almost nothing - when a user isn't looking at their app, pause it
  • Temporary workspaces auto-delete - no forgotten resources running up bills
  • The coding agent only runs during generation - it works for 2-5 minutes, then stops
  • You bill users, not workspaces - charge per app or per generation; your margins come from efficient workspace management

A typical flow:

  1. User requests an app → coding agent workspace runs for ~3 minutes
  2. App workspace created → runs while user is active
  3. User leaves → workspace pauses after 30 minutes of inactivity
  4. User returns → workspace resumes in seconds
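The pause/resume lifecycle in that flow reduces to a small state transition. A sketch - the state names and 30-minute threshold follow the flow above, but the function itself is an assumption, not an Oblien API:

```python
from datetime import datetime, timedelta

PAUSE_AFTER = timedelta(minutes=30)

def next_state(state: str, last_active: datetime, now: datetime,
               user_present: bool) -> str:
    # Pause idle running apps (paused workspaces cost almost nothing);
    # resume the moment the user comes back.
    if state == "running" and not user_present and now - last_active >= PAUSE_AFTER:
        return "paused"
    if state == "paused" and user_present:
        return "running"
    return state
```

Run this on a periodic tick per workspace and your fleet converges to "only active apps burn resources".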

Why This Is Better Than Running Everything on One Server

Approach                               Risk                                  User Experience
All user apps in Docker on one server  One bad app crashes everything        Unreliable
Kubernetes pods per user               Complex, container escapes possible   Expensive to operate
Separate VM per user (traditional)     Slow boot (minutes)                   Users wait too long
MicroVM per user (Oblien)              Hardware-isolated, ~130ms boot        Instant, secure

What You End Up With

An AI app builder where:

  • Users describe what they want in plain English
  • AI agents write, test, and deploy the code
  • Every app runs in its own isolated environment
  • Each app gets an instant preview URL
  • Databases are private and encrypted
  • The whole thing scales automatically

You're not building the AI model - you're orchestrating existing agents (Claude Code, OpenClaw, GPT-4) and giving them safe infrastructure to work with. The infrastructure is the hard part, and platforms like Oblien handle it so you can focus on the product.


Getting Started

  1. Set up an orchestrator agent in a permanent Oblien workspace
  2. Connect your preferred coding agent (Claude Code, Aider, etc.)
  3. Build the user-facing frontend (your SaaS dashboard)
  4. Wire up workspace creation per user request
  5. Add preview URL generation and database provisioning

The AI will keep getting better. The infrastructure patterns stay the same.

Read more: Multi-Agent Architecture Guide | Oblien Documentation