
How to Give Each User Their Own Isolated Sandbox in Your SaaS

Create per-user sandboxed environments for your SaaS - completely isolated, zero shared infrastructure. Ideal for code execution and AI tools.

Oblien Team

If you're building a SaaS product where users can run code, deploy applications, or interact with AI agents, you have a hard problem: how do you make sure User A can't access User B's data, processes, or environment?

Containers help, but they share a kernel. Network policies help, but they're easy to misconfigure. IAM roles help, but they're complex and error-prone at scale.

The cleanest solution is the simplest: give each user their own virtual machine. Not a shared container. Not a namespace within a cluster. A real, hardware-isolated environment that has zero visibility into any other user's environment.

This guide walks through how to build per-user sandboxed environments for your SaaS.


The problem with shared infrastructure

Most SaaS platforms share infrastructure between users. Your application handles multi-tenancy at the code level - checking user IDs, filtering database queries, scoping file paths.

This works until it doesn't:

  • A bug in your tenant filter exposes one user's data to another. This has happened to Salesforce, Microsoft, and dozens of startups.
  • A container escape gives an attacker access to the host, where every other user's container also runs.
  • A misconfigured network policy lets one user's service discover and connect to another user's internal endpoints.
  • Resource abuse by one user (CPU-intensive code, memory leaks) affects everyone on the same machine.

The root cause is that isolation is enforced in software, and software has bugs.


Hardware isolation changes the equation

When each user gets their own microVM, the isolation is enforced by the CPU's virtualization hardware (Intel VT-x / AMD-V). This means:

  • Separate kernels. A kernel exploit in User A's VM doesn't affect User B. They're running different kernel instances.
  • Separate memory. Hardware memory protection (EPT/NPT) prevents one VM from accessing another's memory. Side-channel attacks that work across containers don't work across VMs.
  • Separate disks. Each VM has its own encrypted block device. There's no shared filesystem layer.
  • Separate networks. Each VM has its own virtual network interface. There's no shared network bridge.

You don't need to trust your multi-tenancy code to be perfect anymore. Even if there's a bug, the hardware prevents cross-tenant access.


How to implement per-user sandboxes on Oblien

Oblien makes this practical by letting you create microVM workspaces programmatically and instantly.

The architecture

┌──────────────────────────────────────┐
│           Your SaaS Backend          │
│                                      │
│  User signs up → create workspace    │
│  User runs code → exec in workspace  │
│  User leaves → pause/destroy         │
│                                      │
│  Oblien SDK manages all workspaces   │
└────────┬────────────┬────────────────┘
         │            │
    ┌────▼────┐  ┌───▼─────┐
    │ User A  │  │ User B  │  ...
    │ VM      │  │ VM      │
    │ 2 CPU   │  │ 2 CPU   │
    │ 4GB RAM │  │ 4GB RAM │
    │ Own disk│  │ Own disk│
    │ Own key │  │ Own key │
    └─────────┘  └─────────┘

Step 1: Create a workspace when a user signs up

When a new user joins your platform, create a workspace for them. It takes ~130ms and gives them a fully isolated environment:

```javascript
// `ws` is your initialized Oblien SDK client.
const userWorkspace = await ws.create({
  image: 'node-22',
  cpus: 2,
  memory_mb: 4096,
  writable_size_mb: 5120,
  metadata: {
    userId: user.id,
    plan: user.plan,
  },
});
```

Store the workspace ID in your database alongside the user record.
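
A minimal sketch of that signup flow, assuming an SDK client that exposes the `create` call above (stubbed here so the sketch runs without credentials or real infrastructure):

```javascript
// Assumed helper: tie workspace creation to your user record.
// `ws` stands in for an Oblien SDK client; `saveUser` is your persistence layer.
async function createUserWorkspace(ws, user, saveUser) {
  const workspace = await ws.create({
    image: 'node-22',
    cpus: 2,
    memory_mb: 4096,
    writable_size_mb: 5120,
    metadata: { userId: user.id, plan: user.plan },
  });
  // Persist the workspace ID so later requests can route to this user's VM.
  await saveUser({ ...user, workspaceId: workspace.id });
  return workspace.id;
}

// Stub client and in-memory "database", purely for illustration.
const stubClient = { create: async () => ({ id: 'ws_demo_123' }) };
const db = new Map();

createUserWorkspace(stubClient, { id: 'u1', plan: 'pro' }, async (u) => {
  db.set(u.id, u);
}).then((id) => console.log(id)); // logs the new workspace ID
```

The important part is the pairing: every user row carries exactly one workspace ID, so every later action can be routed to that user's own VM.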

Step 2: Execute user actions in their workspace

When a user runs code, deploys something, or triggers an AI task, execute it in their workspace:

```javascript
// `userCode` is the untrusted string submitted by the user.
const result = await ws.exec(userWorkspace.id, {
  cmd: ['node', '-e', userCode],
  timeout_seconds: 30,
});
```

The code runs in the user's own VM. It can't see or affect any other user's environment.

Step 3: Scope network access per user

By default, each user's workspace is network-dark - nothing outside can reach or discover it. From there, grant only the access each user actually needs:

  • Internet for package installation → Enabled by default
  • Access to your API backend → Add your backend's IP to the workspace's ingress rules
  • Access to a shared database → Create a private link from the user's workspace to the database workspace

What you don't do: allow workspaces to talk to each other. User A's workspace has no way to discover or connect to User B's workspace.
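
That default-deny model can be illustrated with a few lines of plain logic - this is a sketch of the policy, not the SDK's networking API:

```javascript
// Illustration of zero-trust networking: deny by default, and allow
// connectivity only where an explicit private link exists.
function canConnect(src, dst, privateLinks) {
  return privateLinks.some((link) => link.from === src && link.to === dst);
}

// One private link: User A's workspace -> the shared database workspace.
const privateLinks = [{ from: 'ws_user_a', to: 'ws_shared_db' }];

console.log(canConnect('ws_user_a', 'ws_shared_db', privateLinks)); // true
console.log(canConnect('ws_user_a', 'ws_user_b', privateLinks));    // false
```

Because there is no wildcard rule, User B's workspace is unreachable from User A's even if User A guesses its address.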

Step 4: Manage lifecycle based on usage

Not every user is active all the time. Optimize costs:

  • Active user → Workspace is running
  • Idle user (minutes) → Pause the workspace (freezes memory state, minimal cost)
  • Inactive user (hours) → Stop the workspace (zero compute cost, disk persists)
  • Churned user → Delete the workspace (cryptographic erasure of all data)

```javascript
// User hasn't been active for 30 minutes
await ws.pause(workspaceId);

// User comes back
await ws.resume(workspaceId);
// Resumes exactly where they left off - same processes, same state
```
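
The tiers above reduce to a small decision function. This sketch uses illustrative thresholds, not product defaults - tune them to your own usage patterns:

```javascript
// Map how long a user has been idle to the lifecycle tiers above.
// Thresholds are examples only.
function lifecycleAction(idleMinutes, churned) {
  if (churned) return 'delete';              // cryptographic erasure of all data
  if (idleMinutes >= 12 * 60) return 'stop'; // zero compute cost, disk persists
  if (idleMinutes >= 30) return 'pause';     // freeze memory state, minimal cost
  return 'running';                          // active user, keep the VM up
}

console.log(lifecycleAction(5, false));   // "running"
console.log(lifecycleAction(45, false));  // "pause"
console.log(lifecycleAction(900, false)); // "stop"
console.log(lifecycleAction(0, true));    // "delete"
```

A periodic job (or an event on each user request) can run this function per user and call the matching SDK method.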

What you get vs. building it yourself

| Feature | DIY (K8s + containers) | Oblien |
| --- | --- | --- |
| Isolation level | Container (shared kernel) | Hardware (own kernel) |
| Setup time per user | Seconds (but complex YAML) | ~130ms (one API call) |
| Cross-user visibility | Requires careful network policies | Impossible by default |
| Disk encryption per user | Manual setup | Automatic (AES-256, unique key) |
| Data cleanup on deletion | Manual (volumes, secrets, etc.) | Cryptographic erasure (key destroyed) |
| Resource metering | Prometheus + custom dashboards | Built-in per-workspace metrics |
| Pause/resume | Not available in containers | Full memory-state freeze/resume |
| Network isolation | NetworkPolicy (can be misconfigured) | Zero-trust (default deny all) |
| Time to implement | Weeks-months | Hours |

Security guarantees for your users

When you use per-user workspaces, you can make strong promises to your users:

"Your code runs in your own VM." Not a container, not a namespace, not a process. A real virtual machine with hardware isolation.

"No one else can access your data." Other users, other workspaces, even Oblien itself can't access the contents of your workspace's encrypted disk or the processes running in memory.

"Your data is cryptographically erased when deleted." When a workspace is destroyed, its unique encryption key is destroyed first. The data is mathematically unrecoverable.

"Resource limits are enforced at the hardware level." One user consuming 100% CPU in their VM doesn't affect any other user's VM. No noisy-neighbor problems.

These aren't marketing claims - they're architectural facts of running on Firecracker microVMs.


Real-world use cases

AI coding platforms - Each user gets a workspace with their own terminal, filesystem, and running processes. They can install packages, run servers, and execute code without affecting others.

Online education - Each student gets a pre-configured development environment. They can break things freely - reset to a clean state with a snapshot restore.

Data analysis tools - Each user gets a workspace with Jupyter, Python, and their datasets. Heavy computations in one workspace don't slow down others.

Plugin marketplaces - Third-party plugins run in isolated workspaces. A malicious plugin can't access the host platform or other users.

Code interview platforms - Each candidate gets a temporary workspace for the duration of the interview. It's destroyed afterward. No state leakage between candidates.


Getting started

The per-user sandbox pattern works with any tech stack. Your backend calls the Oblien SDK to create, manage, and destroy workspaces. Each workspace is a hardware-isolated microVM with its own encrypted disk and zero-trust networking.

The setup is:

  1. Install the SDK - npm install oblien
  2. Create workspaces on user signup
  3. Execute actions in user workspaces
  4. Manage lifecycle (pause, resume, delete) based on activity
  5. Connect workspaces to shared resources via private links
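
Put together, a stubbed end-to-end pass over these steps looks like the sketch below. The `destroy` method name is an assumption for the deletion call, and the stub fakes all responses so the flow runs anywhere - check the SDK docs for the real client:

```javascript
// Stub client standing in for the Oblien SDK; every call is faked.
const client = {
  create: async (opts) => ({ id: 'ws_1', ...opts }),
  exec: async (id, opts) => ({ exit_code: 0, stdout: 'hi\n' }), // canned result
  pause: async (id) => {},
  resume: async (id) => {},
  destroy: async (id) => {}, // assumed name for workspace deletion
};

async function demo() {
  // 2. Create a workspace on user signup.
  const wsInfo = await client.create({ image: 'node-22', cpus: 2, memory_mb: 4096 });
  // 3. Execute an action in the user's workspace.
  const run = await client.exec(wsInfo.id, {
    cmd: ['node', '-e', "console.log('hi')"],
    timeout_seconds: 30,
  });
  // 4. Manage lifecycle based on activity.
  await client.pause(wsInfo.id);   // user goes idle
  await client.resume(wsInfo.id);  // user returns
  await client.destroy(wsInfo.id); // user churns
  return run.stdout;
}

demo().then((out) => console.log(out)); // logs the captured stdout
```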

Your SaaS gets enterprise-grade isolation without enterprise-grade infrastructure complexity.

Read the SDK docs →