Use Cases
There's no single pattern. Your agent might live in one workspace forever. Or it might create ten more on demand, each scoped, isolated, and gone when the job is done. Or it might run a workspace as a permanent deployment serving live traffic with a custom domain.
All of these are the same primitive - a workspace. What you do with it is up to you.
1. A home for your agent
The simplest pattern: one workspace, always running, your agent lives in it.
No sandboxes. No orchestration. Just a permanent VM with your agent framework installed, a persistent filesystem, and an API token. Your agent receives tasks, does work, and stays alive between conversations.
import { OblienClient } from 'oblien';
import { Workspace } from 'oblien/workspace';
const client = new OblienClient({
clientId: process.env.OBLIEN_CLIENT_ID!,
clientSecret: process.env.OBLIEN_CLIENT_SECRET!,
});
const ws = new Workspace(client);
// One permanent workspace - your agent's home
const agent = await ws.create({
name: 'my-agent',
image: 'node-20',
mode: 'permanent',
config: {
cpus: 4,
memory_mb: 8192,
allow_internet: true, // Agent needs to call LLM APIs
},
});
// Deploy your agent framework as a managed workload - survives reconnects
await ws.workloads.create(agent.id, {
name: 'agent',
cmd: ['node', '/agent/index.js'],
env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY! },
restart: 'on-failure',
});
// That's it. Your agent is live.
Best for: Personal agents, single-purpose bots, agents that don't need to touch anything else.
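What runs inside /agent/index.js is up to you. As a rough sketch, a resident agent is just a long-lived loop that drains a task source and sleeps until more work arrives - `getTask` and `handle` here are hypothetical stand-ins for however your framework receives and executes tasks:

```typescript
// Hypothetical task loop for a resident agent. `getTask` returns the next
// pending task or null when the queue is empty; `handle` does the work.
async function drainTasks(
  getTask: () => Promise<string | null>,
  handle: (task: string) => Promise<void>,
): Promise<number> {
  let handled = 0;
  for (let task = await getTask(); task !== null; task = await getTask()) {
    await handle(task);
    handled++;
  }
  return handled;
}

// A permanent workspace is what makes this viable: drain, sleep, repeat,
// with filesystem state surviving between rounds.
async function agentLoop(
  getTask: () => Promise<string | null>,
  handle: (task: string) => Promise<void>,
  idleMs: number,
): Promise<void> {
  for (;;) {
    await drainTasks(getTask, handle);
    await new Promise((r) => setTimeout(r, idleMs));
  }
}
```

Running this under a managed workload with restart: 'on-failure', as above, means a crash mid-task brings the loop back automatically.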
2. Isolating work from your agent
Your agent's home should stay clean. When it needs to run something it doesn't trust - user-submitted code, an LLM-generated script, a browser automation task - it creates a separate workspace for that work. Not because it has to. Because isolation is the right call.
The separate workspace can be temporary (auto-deletes after 5 minutes) or long-lived (stays alive, agent can come back to it). Either way, if something goes wrong inside it, the agent's home is untouched.
// Agent receives user code to execute
async function runUserCode(code: string) {
// Spin up an isolated workspace - no internet, can't reach the agent VM
const isolated = await ws.create({
image: 'python-3.12',
config: {
ttl: '5m',
remove_on_exit: true,
allow_internet: false,
ingress: [], // Nothing can reach it
},
});
await ws.fs.write(isolated.id, '/task/run.py', code);
const result = await ws.exec(isolated.id, { cmd: ['python3', '/task/run.py'] });
return result; // Workspace auto-deletes - nothing leaks back
}
// Or keep it alive for a long task - destroy it manually when done
const workerWs = await ws.create({
name: 'heavy-task',
image: 'python-3.12',
mode: 'permanent', // Stays alive until you destroy it
config: {
cpus: 8,
memory_mb: 16384,
allow_internet: false,
egress: [agent.info.internal_ip], // Only the agent can receive results
},
});
// ... agent periodically polls for progress ...
await ws.destroy(workerWs.id); // Done. Clean slate.
Best for: Running untrusted code, heavy compute you don't want competing with the agent's resources, tasks that need strict egress control.
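The "periodically polls for progress" step above can be sketched as a small helper. `readProgress` is a hypothetical stand-in for however the agent reads state out of the worker - for example a progress file it fetches over the SDK's filesystem API:

```typescript
// Hypothetical polling helper: reads a progress report from the worker
// until it says it's done, waiting between attempts.
async function pollUntilDone(
  readProgress: () => Promise<{ done: boolean; pct: number }>,
  intervalMs = 5_000,
  maxAttempts = 120,
): Promise<number> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const progress = await readProgress();
    if (progress.done) return progress.pct;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('worker did not finish within the polling window');
}
```

Once it resolves, the agent collects results and destroys the worker as above; the bounded attempt count keeps a hung worker from blocking the agent forever.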
3. Agent + two persistent workspaces
A common pattern: your agent has a home, a dedicated data workspace it owns, and optionally a tool or app workspace. All three are permanent, wired together over the private network, and each has only the access it needs.
// 1. Agent's home - internet access for LLM APIs
const agent = await ws.create({
name: 'agent',
image: 'node-20',
mode: 'permanent',
config: { cpus: 4, memory_mb: 8192, allow_internet: true },
});
// 2. Data workspace - only the agent can reach it
const data = await ws.create({
name: 'agent-data',
image: 'postgres-16',
mode: 'permanent',
config: {
allow_internet: false,
ingress: [agent.info.internal_ip], // Agent only
},
});
// 3. App workspace - serves users; driven by the agent, reads the data workspace
const app = await ws.create({
name: 'agent-app',
image: 'node-20',
mode: 'permanent',
config: {
allow_internet: false,
ingress: [
agent.info.internal_ip, // The agent can reach the app
'0.0.0.0/0', // Accepts public traffic via proxy
],
},
});
// The app initiates connections to the data workspace, so whitelist its IP there
// (assumes ingress rules can be updated in place, like ttl elsewhere)
await ws.update(data.id, {
config: { ingress: [agent.info.internal_ip, app.info.internal_ip] },
});
// Expose the app publicly - HTTPS is automatic
const endpoint = await ws.publicAccess.expose(app.id, { port: 3000 });
console.log(endpoint.url); // → https://a1b2c3.preview.oblien.com
// All three talk over private IPs - no public internet between them
await ws.exec(agent.id, {
cmd: ['psql', '-h', data.info.internal_ip, '-U', 'postgres'],
});
The agent is the authority. The data workspace holds state. The app workspace serves users. Nothing is exposed that shouldn't be.
Best for: Any agent that needs persistent storage and a user-facing surface.
4. On-demand workspaces for SaaS - scoped, per-user, triggered by your agent
Your agent (or your backend) creates workspaces on demand as users interact with your product. Each workspace is scoped to that user - it can only access that user's data, not anyone else's. When the session ends, it auto-deletes. Or it stays alive so the user can come back.
This is the SaaS model: your agent is the controller. Users never touch other users' workspaces. The agent's API token is scoped so it can only manage what it created.
// When a user starts a session
async function startUserSession(userId: string) {
const userDataWs = await getUserDataWorkspace(userId); // permanent, exists per user
// Create an on-demand workspace scoped to this user
const sessionWs = await ws.create({
name: `session-${userId}`,
image: 'node-20',
config: {
ttl: '30m', // Auto-deletes after 30 minutes of inactivity
remove_on_exit: true,
allow_internet: false,
egress: [userDataWs.info.internal_ip], // Can ONLY reach this user's data
},
});
// Give the user a live environment
const preview = await ws.publicAccess.expose(sessionWs.id, { port: 3000 });
const ssh = await ws.ssh.enable(sessionWs.id);
return { url: preview.url, ssh: ssh.connection.command };
}
// User wants a longer-lived environment - convert to permanent
async function upgradeSession(sessionWsId: string) {
await ws.update(sessionWsId, { mode: 'permanent', config: { ttl: null } });
// Workspace stays alive indefinitely now - user owns this environment
}
// Pause while idle, resume instantly on return
async function pauseOnIdle(id: string) { await ws.pause(id); }
async function resumeOnReturn(id: string) { await ws.resume(id); }
Your agent's API token is scoped. Even if it created 100 user workspaces, a workspace created for User A cannot reach User B's workspace - the egress rules enforce that at the network level.
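pauseOnIdle needs something on your side to decide when a user has gone idle. One common shape - plain TypeScript, not an SDK feature - is a resettable one-shot timer that your request handler touches on every user action:

```typescript
// Resettable idle timer: fires `onIdle` once after `ms` with no touches.
function idleTimer(onIdle: () => void, ms: number) {
  let handle: ReturnType<typeof setTimeout> | null = null;
  return {
    // Call on every user action - postpones the idle callback.
    touch() {
      if (handle) clearTimeout(handle);
      handle = setTimeout(onIdle, ms);
    },
    // Call when the session ends - nothing fires afterwards.
    cancel() {
      if (handle) clearTimeout(handle);
      handle = null;
    },
  };
}

// e.g. const timer = idleTimer(() => pauseOnIdle(sessionWs.id), 10 * 60 * 1000);
```

Pair it with resumeOnReturn in whatever handler fires when the user comes back, and touch the timer again from there.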
Best for: Online IDEs, AI coding assistants per user, browser automation per session, educational platforms, customer sandboxes.
5. Turning a workspace into a deployment
A workspace is just compute. There's nothing stopping you from running it as production infrastructure - permanent, always on, mapped to a custom domain. Start as a dev environment, go live without changing anything.
// Start as a dev environment
const deployment = await ws.create({
name: 'my-api',
image: 'node-20',
mode: 'permanent',
config: {
cpus: 2,
memory_mb: 4096,
writable_size_mb: 20480, // 20 GB for app files and logs
},
});
// Your agent deploys the app
await ws.exec(deployment.id, {
cmd: ['bash', '-c', 'git clone https://github.com/org/api.git /app && cd /app && npm install'],
});
// Run the server as a managed workload - auto-restarts on crash
await ws.workloads.create(deployment.id, {
name: 'api-server',
cmd: ['node', '/app/server.js'],
env: {
PORT: '3000',
DATABASE_URL: `postgres://${data.info.internal_ip}:5432/prod`,
NODE_ENV: 'production',
},
restart: 'always',
});
// Map a custom domain - HTTPS provisioned automatically
const endpoint = await ws.publicAccess.expose(deployment.id, {
port: 3000,
domain: 'api.yourdomain.com',
});
// Live at https://api.yourdomain.com
// Your agent can deploy updates at any time
await ws.exec(deployment.id, {
cmd: ['bash', '-c', 'cd /app && git pull && npm install'],
});
await ws.workloads.restart(deployment.id, 'api-server');
Same workspace, same SDK. No separate deployment infrastructure. The difference between a dev workspace and a production server is just mode: 'permanent' and a domain mapping.
Best for: API servers, background services, agent-managed deployments, anything that needs to be always on with a public URL.
6. Multi-agent with scoped control
Your lead agent coordinates specialist agents - each in their own workspace. The lead can reach them over the private network, assign work, and read results. Crucially, each specialist agent only has access to its own workspace - it can't see or interfere with others.
// Lead agent creates specialists - each is scoped independently
const researcher = await ws.create({
name: 'researcher',
image: 'python-3.12',
mode: 'permanent',
config: {
cpus: 2,
memory_mb: 4096,
allow_internet: true, // Researcher needs web access
ingress: [agent.info.internal_ip], // Only lead agent can reach it
},
});
const coder = await ws.create({
name: 'coder',
image: 'node-20',
mode: 'permanent',
config: {
cpus: 4,
memory_mb: 8192,
allow_internet: false,
ingress: [agent.info.internal_ip], // Only lead agent can reach it
},
});
// The coder pulls its spec from the researcher, so whitelist that route
// (assumes ingress rules can be updated in place, like ttl above)
await ws.update(researcher.id, {
config: { ingress: [agent.info.internal_ip, coder.info.internal_ip] },
});
// Temporary specialist - only needed for this task, self-destructs
const tester = await ws.create({
name: 'tester',
image: 'node-20',
config: {
ttl: '30m',
remove_on_exit: true,
ingress: [agent.info.internal_ip], // Only the lead can reach the tester
egress: [coder.info.internal_ip], // The tester itself can only call the coder
},
});
// ...and let the tester's requests into the coder
await ws.update(coder.id, {
config: { ingress: [agent.info.internal_ip, tester.info.internal_ip] },
});
// Lead assigns work via internal IPs
await ws.exec(researcher.id, { cmd: ['python3', '/agent/research.py'] });
await ws.exec(coder.id, {
cmd: ['node', '/agent/implement.js', '--spec', `http://${researcher.info.internal_ip}:8080/results`],
});
await ws.exec(tester.id, {
cmd: ['npm', 'test', '--', '--target', `http://${coder.info.internal_ip}:3000`],
});
// tester auto-deletes after 30m
Each specialist has exactly the access it needs. The researcher has internet. The coder and tester are private. The only internal routes are the ones you granted: the lead reaches every specialist, the coder pulls its spec from the researcher, and the tester drives requests at the coder. Nothing else connects.
Best for: Multi-model pipelines, research + coding + testing flows, parallel specialist agents.
7. Parallel workers, one data store
Your agent fans out temporary workspaces in parallel - each handles one chunk of a job, writes results to a shared data workspace, and self-destructs when done. You control exactly how many workers run and what they can reach.
// One external data source per worker - hostnames, so they can double as egress rules
const items = ['source-1.example.com', 'source-2.example.com', 'source-3.example.com', 'source-4.example.com', 'source-5.example.com'];
// Shared results store - permanent, no internet
const store = await ws.create({
name: 'results',
image: 'postgres-16',
mode: 'permanent',
config: { allow_internet: false, ingress: ['10.0.0.0/8'] },
});
// Fan out - one worker per item
const jobs = items.map(async (item) => {
const worker = await ws.create({
image: 'python-3.12',
config: {
ttl: '10m',
remove_on_exit: true,
allow_internet: true,
egress: [item, store.info.internal_ip], // Only target + results store
},
});
await ws.fs.write(worker.id, '/task/config.json', JSON.stringify({
source: item,
db: store.info.internal_ip,
}));
return ws.exec(worker.id, { cmd: ['python3', '/task/process.py'] });
// worker auto-deletes when done
});
const results = await Promise.all(jobs);
Workers are scoped - each one can only reach its specific data source and the shared store. Nothing else. When the job is done, every worker is gone.
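The map above launches every worker at once. If "exactly how many workers run" should be a hard cap rather than one-per-item, a small bounded map - plain TypeScript, independent of the SDK - keeps at most `limit` workers in flight:

```typescript
// Run `fn` over `items` with at most `limit` promises in flight at a time.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function runner(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim an index before awaiting
      results[i] = await fn(items[i]);
    }
  }
  // Start `limit` runners; each picks up the next unclaimed item as it frees up.
  const runners = Array.from({ length: Math.min(limit, items.length) }, runner);
  await Promise.all(runners);
  return results;
}

// e.g. const results = await mapWithLimit(items, 3, processItemInWorker);
```

Each call to `fn` would create, run, and (via ttl) discard one worker workspace, so the cap on promises is also a cap on concurrent VMs.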
Best for: Data pipelines, web scraping, video/image processing, any parallel compute job.
Public access - expose ports, not VMs
Any workspace port becomes a live HTTPS endpoint on demand. You pick which ports to expose. Everything else stays completely unreachable.
How it actually works - and why it's safe
Your VM never gets a public IP. When you expose a port, Oblien's edge layer receives the traffic, terminates HTTPS, and proxies it to your workspace's internal IP on that port. The workspace itself only binds to localhost or its private internal IP - it never touches the public internet directly.
This means:
- A workspace with no exposed ports is completely unreachable from the internet - no IP, no attack surface
- A workspace with an exposed port is reachable only on that port, only via the Oblien proxy - nothing else
- Your app serves plain HTTP internally - Oblien handles TLS, certificates, and routing
- Revoking a port takes effect instantly - the URL stops working immediately, no DNS propagation
// Preview URL - instant, shareable
const preview = await ws.publicAccess.expose(workspaceId, {
port: 3000,
label: 'Frontend',
});
// → https://a1b2c3.preview.oblien.com
// Custom domain - HTTPS provisioned automatically, CNAME to preview URL
const prod = await ws.publicAccess.expose(workspaceId, {
port: 3000,
domain: 'app.yourdomain.com',
});
// → https://app.yourdomain.com (cert provisioned, auto-renewed)
// Each port gets its own independent URL
await ws.publicAccess.expose(id, { port: 3000, label: 'Frontend' });
await ws.publicAccess.expose(id, { port: 8080, label: 'API' });
await ws.publicAccess.expose(id, { port: 9090, label: 'WebSocket' });
// Revoke any port instantly
await ws.publicAccess.revoke(workspaceId, 3000);
// URL returns 404 immediately
- Up to 20 ports per workspace, each with its own URL or domain
- Works with any HTTP server (Node, Python, Go, static files, WebSockets)
- Non-exposed ports on the same workspace remain completely unreachable
Internal networking - isolated by default
Every workspace is invisible to every other workspace by default - even on the same account. There are no implicit trusts, no shared subnets that "just work". Connectivity is something you explicitly grant, one IP at a time.
The default: nothing can reach anything
A workspace with no ingress configuration cannot receive connections from any other VM - including other workspaces you own. It's a closed box.
// This workspace cannot receive connections from anyone
const isolated = await ws.create({
image: 'postgres-16',
mode: 'permanent',
config: {
allow_internet: false,
// No ingress - unreachable from any direction
},
});
Whitelisted connections - you decide who gets in
To allow workspace A to talk to workspace B, you explicitly whitelist A's internal IP in B's ingress rules. Nothing else can reach B.
const wsA = await ws.create({ image: 'node-20', mode: 'permanent' });
// wsB only accepts connections from wsA - nothing else
const wsB = await ws.create({
image: 'postgres-16',
mode: 'permanent',
config: {
allow_internet: false,
ingress: [wsA.info.internal_ip], // Exact IP - not a range
},
});
// wsA connects to wsB directly over private IP
await ws.exec(wsA.id, {
cmd: ['psql', '-h', wsB.info.internal_ip, '-U', 'postgres'],
});
// Any other workspace trying to reach wsB gets a connection refused
Egress control - lock down what a workspace can call
You control outbound too. A workspace can be allowed to call only specific external hosts or internal IPs - everything else is blocked at the network level.
// This workspace can only reach OpenAI and one specific internal DB
const sandboxed = await ws.create({
image: 'python-3.12',
config: {
allow_internet: true,
egress: [
'api.openai.com', // External LLM API
dataWs.info.internal_ip, // One internal workspace
// Everything else: connection refused
],
},
});
This is how scoped SaaS workspaces work - a user's sandbox can only reach that user's data workspace, even if 100 other workspaces exist on the same account.
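The ingress and egress rules compose into a simple reachability check. The sketch below is a simplified model of the semantics described in this section - not Oblien's actual implementation - and it treats rules as exact IPs plus the 0.0.0.0/0 wildcard rather than full CIDR matching:

```typescript
interface Net {
  ip: string;
  ingress?: string[]; // missing or empty: nothing can reach this workspace
  egress?: string[];  // missing: outbound unrestricted (an assumption here)
}

// Can `src` open a connection to `dst`? Both sides must agree:
// dst must whitelist src (ingress), and src must be allowed to call dst (egress).
function canConnect(src: Net, dst: Net): boolean {
  const ingress = dst.ingress ?? [];
  const ingressOk = ingress.includes(src.ip) || ingress.includes('0.0.0.0/0');
  const egressOk = src.egress === undefined || src.egress.includes(dst.ip);
  return ingressOk && egressOk;
}
```

In this model, User A's session (egress limited to User A's data workspace) fails the egress check against User B's data, and User B's ingress list rejects it again from the other side - isolation holds even if either rule is misconfigured alone.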
Full air-gap
For maximum isolation - no inbound, no outbound, nothing:
const airgapped = await ws.create({
image: 'python-3.12',
config: {
allow_internet: false,
ingress: [], // No inbound from any VM
ttl: '5m',
remove_on_exit: true,
},
});
// This workspace can talk to nothing and be reached by nothing
Ideal for executing untrusted code where zero data exfiltration is the requirement.