
Oblien vs Traditional Cloud: Why We Built a Platform Just for AI Agents

Why existing cloud platforms fail AI agent workloads, and how purpose-built microVM infrastructure solves what EC2, Lambda, and K8s can't.

Oblien Team

We didn't build Oblien because the world needed another cloud platform. We built it because AI agents broke every existing one.

Try running an autonomous AI agent on AWS. You'll spend more time on infrastructure than on the agent itself - setting up VPCs, configuring security groups, managing container orchestration, handling secrets, wiring up monitoring. And even after all that, your agent is probably running in a Docker container that shares a kernel with everything else on the host.

AI agents are a fundamentally different workload. They deserve infrastructure designed for them.


What Makes AI Agents Different

Traditional software is predictable. A web server handles HTTP requests. A database stores and retrieves data. You know what they'll do because you wrote the code.

AI agents are unpredictable by design. They:

  • Generate and execute code at runtime - you don't know what they'll run until they run it
  • Explore their environment - they look for files, environment variables, network services
  • Make autonomous decisions - they choose what to do next without human approval
  • Process untrusted input - user prompts can contain injections that alter behavior
  • Create subprocesses - agents spawn other agents, scripts, and background tasks

This means every existing assumption about cloud security is wrong for agents. Security groups assume you know which ports you need. IAM policies assume you know which services will be accessed. Container orchestration assumes workloads are deterministic.


What We Tried First (And Why It Didn't Work)

AWS EC2 + Docker

Our first attempt: run agents in Docker containers on EC2 instances.

Problems:

  • Container escapes are real - multiple CVEs per year
  • Shared kernel - every container talks to the host kernel through its 300+ system calls, a huge shared attack surface
  • Network isolation is opt-in - containers on the same bridge network can see each other
  • No built-in encryption - disk encryption requires manual setup
  • Slow provisioning - spinning up a new EC2 instance takes 30-90 seconds

AWS Lambda

Ironically, Lambda already uses Firecracker microVMs internally. But Lambda is designed for functions, not agents:

  • 15-minute execution limit - agents need to run for hours or days
  • No persistent filesystem - agents need to store state
  • No SSH - can't debug a live agent
  • No custom networking - can't create private agent-to-agent links
  • Fixed runtimes - can't install arbitrary system packages

Kubernetes

The "enterprise" answer to everything:

  • Massive complexity - months of setup before your first agent runs
  • Still uses containers - pods share the node's kernel
  • Network policies are hard - Calico, Cilium, or custom CNI configuration
  • Resource management is painful - pod scheduling, node autoscaling, resource quotas
  • Not designed for ephemeral workloads - creating and destroying pods rapidly is expensive

What We Built Instead

Oblien is purpose-built for workloads that execute arbitrary code in isolated environments. Every design decision comes from that starting point.

Every workspace is a VM, not a container

Each workspace is a Firecracker microVM - its own Linux kernel, its own memory space, its own encrypted filesystem. The hypervisor (KVM) provides hardware-level isolation. No guest-to-host escape from Firecracker has been reported in production to date.

Zero-trust networking by default

Every workspace starts completely network-dark. No inbound connections, no visibility to other workspaces. You explicitly enable every network path:

  • Public access? Opt-in per port
  • Workspace-to-workspace? Explicit private links
  • Internet access? On by default, lockable per workspace
  • Outbound restrictions? Allowlist specific hosts
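The default-deny semantics above can be illustrated with a tiny rule evaluator. This is our own sketch of the model, not Oblien's implementation; the rule shapes and field names are assumptions:

```typescript
// Zero-trust sketch: everything is denied unless an explicit rule allows it.
type Rule =
  | { kind: "publicPort"; port: number }      // opt-in public exposure
  | { kind: "privateLink"; peer: string }     // explicit workspace-to-workspace link
  | { kind: "outboundHost"; host: string };   // allowlisted outbound host

interface NetworkPolicy {
  rules: Rule[];
}

function allowsInbound(policy: NetworkPolicy, port: number): boolean {
  return policy.rules.some((r) => r.kind === "publicPort" && r.port === port);
}

function allowsPeer(policy: NetworkPolicy, peer: string): boolean {
  return policy.rules.some((r) => r.kind === "privateLink" && r.peer === peer);
}

function allowsOutbound(policy: NetworkPolicy, host: string): boolean {
  return policy.rules.some((r) => r.kind === "outboundHost" && r.host === host);
}

// A new workspace starts network-dark: an empty rule set, so every check fails.
const dark: NetworkPolicy = { rules: [] };

// Each path must be opened explicitly.
const opened: NetworkPolicy = {
  rules: [
    { kind: "publicPort", port: 8080 },
    { kind: "privateLink", peer: "ws_worker_1" },
    { kind: "outboundHost", host: "api.example.com" },
  ],
};

console.log(allowsInbound(dark, 8080));   // false: nothing is reachable
console.log(allowsInbound(opened, 8080)); // true: explicitly exposed
```

The point of the sketch is the direction of the default: absence of a rule means "no", so forgetting configuration fails closed rather than open.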

Sub-second provisioning

From API call to running Linux VM: ~130ms. Not warmup. Not cached containers. A real VM with its own kernel.

This changes what's possible:

  • Create a workspace per user request
  • Spin up 50 workers in parallel
  • Disposable sandboxes for every code execution
  • No pre-provisioning needed
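The fan-out pattern this enables looks roughly like the sketch below. The `bootWorkspace` stub stands in for the real SDK call and simulates provisioning locally; the actual function name and signature are assumptions:

```typescript
// Fan-out sketch: when provisioning is cheap, every job gets its own
// disposable sandbox. `bootWorkspace` is a local stand-in for the SDK call.
interface Workspace {
  id: string;
  exec(cmd: string): Promise<string>;
}

async function bootWorkspace(name: string): Promise<Workspace> {
  // Simulate a fast boot; the real call would provision a microVM (~130ms).
  await new Promise((resolve) => setTimeout(resolve, 10));
  return {
    id: `ws_${name}`,
    exec: async (cmd: string) => `ran: ${cmd}`,
  };
}

async function fanOut(jobs: string[]): Promise<string[]> {
  // Boot all workers in parallel, then run one job per worker.
  const workers = await Promise.all(jobs.map((job) => bootWorkspace(job)));
  return Promise.all(workers.map((w, i) => w.exec(jobs[i])));
}

const jobs = Array.from({ length: 50 }, (_, i) => `task-${i}`);
fanOut(jobs).then((results) => console.log(results.length)); // 50
```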

Built-in encryption

Every workspace disk is encrypted with AES-256 using a unique per-workspace key. You don't configure this - it's the default. When you delete a workspace, the key is destroyed, making the data cryptographically unrecoverable, and the storage is securely erased.
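The crypto-erasure idea behind this can be demonstrated with Node's built-in crypto module. This illustrates the concept only; it is not Oblien's storage layer:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt data under a unique per-workspace key using AES-256-GCM.
function encrypt(key: Buffer, plaintext: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(key: Buffer, blob: { iv: Buffer; ciphertext: Buffer; tag: Buffer }) {
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag);
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]);
}

let workspaceKey: Buffer | null = randomBytes(32); // unique 256-bit key
const blob = encrypt(workspaceKey, Buffer.from("agent state"));

// While the key exists, the data is readable.
console.log(decrypt(workspaceKey, blob).toString()); // "agent state"

// "Deleting the workspace" destroys the key. The ciphertext may still
// exist somewhere, but without the key it cannot be decrypted.
workspaceKey.fill(0);
workspaceKey = null;
```

Because AES-256-GCM is authenticated, decrypting with any other key fails outright rather than returning garbage, which is what makes destroying the key equivalent to erasing the data.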

Agent-native API

Instead of mapping agent concepts onto cloud primitives (instances, security groups, volumes, load balancers), our SDK speaks the language of agents:

  • ws.create() - give an agent a workspace
  • ws.exec() - run a command
  • ws.fs.write() - write a file
  • ws.network.update() - change network rules
  • ws.publicAccess.expose() - get a public URL
  • ws.lifecycle.makePermanent() - keep it running
  • ws.pause() - freeze and save costs

No Terraform. No CloudFormation. No YAML. One function call per action.
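The call surface above can be sketched as a TypeScript interface with a toy in-memory implementation so the shape can be exercised locally. Method names mirror the list; the exact signatures and return types are assumptions, not the documented SDK:

```typescript
// Assumed shape of a workspace handle; signatures are illustrative only.
interface AgentWorkspace {
  exec(cmd: string): Promise<{ stdout: string }>;
  fs: {
    write(path: string, data: string): Promise<void>;
  };
  pause(): Promise<void>;
}

// Toy in-memory implementation, standing in for the real client.
function createWorkspace(): AgentWorkspace {
  const files = new Map<string, string>();
  return {
    exec: async (cmd) => ({ stdout: `ran: ${cmd}` }),
    fs: {
      write: async (path, data) => {
        files.set(path, data);
      },
    },
    pause: async () => {},
  };
}

async function demo(): Promise<string> {
  const ws = createWorkspace(); // one call, no YAML
  await ws.fs.write("/app/main.py", "print('hi')");
  const { stdout } = await ws.exec("python /app/main.py");
  await ws.pause(); // freeze when idle
  return stdout;
}

demo().then(console.log);
```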


The Comparison Nobody Asked For

| Requirement | EC2 + Docker | Lambda | Kubernetes | Oblien |
| --- | --- | --- | --- | --- |
| Boot a new environment | 30-90s | 100-500ms | 5-30s | ~130ms |
| Hardware isolation | No (shared kernel) | Yes (Firecracker) | No (shared kernel) | Yes (Firecracker) |
| Persistent filesystem | Manual EBS | No | Manual PV | Built-in |
| Encrypted disk | Manual setup | N/A | Manual setup | Automatic |
| Zero-trust network | Complex VPC config | Limited | Complex policy | Default |
| SSH access | Manual setup | No | kubectl exec | Built-in |
| Public URLs | ALB/NLB setup | API Gateway | Ingress controller | One API call |
| Cost when idle | Full (unless stopped) | Free | Node cost continues | Near-zero (paused) |
| Setup complexity | Days | Hours | Weeks | Minutes |
| Code execution sandboxing | None | Built-in | None | Built-in |

Who This Is For

Teams building AI products

You're building with LangChain, CrewAI, OpenClaw, or Claude Code. Your agents need to execute code, manage files, and interact with the world. Oblien gives them a safe place to do all of that.

SaaS platforms with user-facing AI

Your product lets users interact with AI agents that write code, generate content, or automate workflows. Each user needs an isolated environment. Oblien makes per-user microVMs practical.

Developers who want to build, not configure

You want to deploy an agent, not spend two weeks setting up Kubernetes. npm install oblien, write your agent logic, call ws.create(). Done.


What We Don't Do

Oblien is not:

  • A general-purpose cloud - we don't offer managed databases, CDN, or email
  • A hosting platform - we don't deploy your Next.js frontend (though you could use a workspace for it)
  • An AI model provider - bring your own LLM API keys
  • A development IDE - use your preferred editor, SSH in, or use our terminal

We do one thing: programmable, secure, instant workspaces for AI agents and code execution. Everything else plugs in through our API.


The Future We're Building Toward

AI agents will eventually run most software. Not as chatbots or copilots - as autonomous systems that build, deploy, and manage applications. When that happens, every agent will need:

  • A persistent home with state and identity
  • The ability to create and destroy environments on demand
  • Hardware isolation for executing untrusted code
  • Private networking between agent systems
  • Encrypted storage for sensitive data

That's Oblien. We're building the infrastructure layer for the agent-native future.

Try it: Getting Started | SDK Reference