Use Cases
There's no single pattern. You might deploy an AI agent that orchestrates ten workspaces. Or run a persistent API server with a custom domain. Or SSH into a dev environment to iterate on code. Or spin up ephemeral sandboxes for each user of your SaaS platform.
All of these are the same primitive - a workspace. What you do with it is up to you.
1. A home for your agent
The simplest pattern: one workspace, always running, your agent lives in it.
No sandboxes. No orchestration. Just a permanent VM with your agent framework installed, a persistent filesystem, and an API token. Your agent receives tasks, does work, and stays alive between conversations.
import Oblien from 'oblien';

const client = new Oblien({
  clientId: process.env.OBLIEN_CLIENT_ID!,
  clientSecret: process.env.OBLIEN_CLIENT_SECRET!,
});

const ws = client.workspaces;

// One permanent workspace - your agent's home
const agent = await ws.create({
  name: 'my-agent',
  image: 'node-20',
  mode: 'permanent',
  config: {
    cpus: 4,
    memory_mb: 8192,
    allow_internet: true, // Agent needs to call LLM APIs
  },
});

// Deploy your agent framework as a managed workload - survives reconnects
await ws.workloads.create(agent.id, {
  name: 'agent',
  cmd: ['node', '/agent/index.js'],
  env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY! },
  restart: 'on-failure',
});

// That's it. Your agent is live.

Best for: Personal agents, single-purpose bots, agents that don't need to touch anything else.
2. A persistent service - no agent required
Run an API server, a background worker, or a database as a permanent workspace. Map it to a custom domain, wire it to other workspaces over the private network, and let it run. No agent orchestration needed - just deploy and expose.
import Oblien from 'oblien';

const client = new Oblien({
  clientId: process.env.OBLIEN_CLIENT_ID!,
  clientSecret: process.env.OBLIEN_CLIENT_SECRET!,
});

const ws = client.workspaces;

// Create a permanent workspace for your API
const api = await ws.create({
  name: 'production-api',
  image: 'node-20',
  mode: 'permanent',
  config: { cpus: 2, memory_mb: 4096 },
});

// Clone and install your app via Runtime
const rt = await ws.runtime(api.id);
await rt.exec.run(['bash', '-c', 'git clone https://github.com/org/api.git /app && cd /app && npm install']);

// Run the server as a managed workload - auto-restarts on crash
await ws.workloads.create(api.id, {
  name: 'server',
  cmd: ['node', '/app/server.js'],
  env: { PORT: '3000', NODE_ENV: 'production' },
  restart: 'always',
});

// Map a custom domain - route is immediate, HTTPS becomes active once DNS points at the edge
await ws.publicAccess.expose(api.id, {
  port: 3000,
  domain: 'api.yourdomain.com',
});
// Live at https://api.yourdomain.com

The workspace is the deployment. No containers, no orchestrator, no extra infrastructure. SSH in to debug, check logs via SSE, push updates with git pull and a workload restart.
Best for: API servers, background workers, databases, any always-on service that needs a public URL or private network access.
3. A remote dev environment
SSH into a workspace, install your stack, and develop remotely. Expose a port for live preview. The workspace is your dev machine - persistent filesystem, full root access, and instant boot.
// Create a dev environment
const dev = await ws.create({
  name: 'staging',
  image: 'node-20',
  mode: 'permanent',
  config: {
    cpus: 4,
    memory_mb: 8192,
    disk_size_mb: 20480,
  },
});

// Enable SSH access
const ssh = await ws.ssh.enable(dev.id);
console.log(ssh.connection.command);
// → ssh oblien@gateway.oblien.com -p 2222 -L staging

// Expose a dev server for live preview
const preview = await ws.publicAccess.expose(dev.id, { port: 3000 });
console.log(preview.url);
// → https://a1b2c3.preview.oblien.com

// Pause when idle - resume instantly when you're back
await ws.pause(dev.id);
// ...
await ws.resume(dev.id);

Snapshot your dev environment at any point and restore it later. Share the workspace with a teammate. Or promote it to production by mapping a custom domain - same workspace, no rebuild.
Best for: Remote development, staging environments, pair programming, teams that want consistent reproducible environments.
4. Isolating work from your agent
Your agent's home should stay clean. When it needs to run something it doesn't trust - user-submitted code, an LLM-generated script, a browser automation task - it creates a separate workspace for that work. Not because it has to. Because isolation is the right call.
The separate workspace can be temporary (auto-deletes after 5 minutes) or long-lived (stays alive, agent can come back to it). Either way, if something goes wrong inside it, the agent's home is untouched.
// Agent receives user code to execute
async function runUserCode(code: string) {
  // Spin up an isolated workspace - no internet, can't reach the agent VM
  const isolated = await ws.create({
    image: 'python-3.12',
    config: {
      ttl: '5m',
      remove_on_exit: true,
      allow_internet: false,
      ingress: [], // Nothing can reach it
    },
  });
  const rt = await ws.runtime(isolated.id);
  await rt.files.write({ fullPath: '/task/run.py', content: code });
  const result = await rt.exec.run(['python3', '/task/run.py']);
  return result; // Workspace auto-deletes - nothing leaks back
}

// Or keep it alive for a long task - destroy it manually when done
const workerWs = await ws.create({
  name: 'heavy-task',
  image: 'python-3.12',
  mode: 'permanent', // Stays alive until you destroy it
  config: {
    cpus: 8,
    memory_mb: 16384,
    allow_internet: false,
    egress: [agent.info.internal_ip], // Only the agent can receive results
  },
});
// ... agent periodically polls for progress ...
await ws.lifecycle.destroy(workerWs.id); // Done. Clean slate.

Best for: Running untrusted code, heavy compute you don't want competing with the agent's resources, tasks that need strict egress control.
5. Agent + two persistent workspaces
A common pattern: your agent has a home, a dedicated data workspace it owns, and optionally a tool or app workspace. All three are permanent, wired together over the private network, and each has only the access it needs.
// 1. Agent's home - internet access for LLM APIs
const agent = await ws.create({
  name: 'agent',
  image: 'node-20',
  mode: 'permanent',
  config: { cpus: 4, memory_mb: 8192, allow_internet: true },
});

// 2. Data workspace - only the agent can reach it
const data = await ws.create({
  name: 'agent-data',
  image: 'postgres-16',
  mode: 'permanent',
  config: {
    allow_internet: false,
    ingress: [agent.info.internal_ip], // Agent only
  },
});

// 3. App workspace - serves users, talks to agent and data internally
const app = await ws.create({
  name: 'agent-app',
  image: 'node-20',
  mode: 'permanent',
  config: {
    allow_internet: false,
    ingress: [
      agent.info.internal_ip,
      data.info.internal_ip,
      '0.0.0.0/0', // Accepts public traffic via proxy
    ],
  },
});

// Expose the app publicly - HTTPS is automatic
const endpoint = await ws.publicAccess.expose(app.id, { port: 3000 });
console.log(endpoint.url); // → https://a1b2c3.preview.oblien.com

// All three talk over private IPs - no public internet between them
const agentRt = await ws.runtime(agent.id);
await agentRt.exec.run(['psql', '-h', data.info.internal_ip, '-U', 'postgres']);

The agent is the authority. The data workspace holds state. The app workspace serves users. Nothing is exposed that shouldn't be.
Best for: Any agent that needs persistent storage and a user-facing surface.
6. On-demand workspaces for SaaS - scoped and per-user
Your agent (or your backend) creates workspaces on demand as users interact with your product. Each workspace is scoped to that user - it can only access that user's data, not anyone else's. When the session ends, it auto-deletes. Or it stays alive so the user can come back.
This is the SaaS model: your backend or agent is the controller. Users never touch other users' workspaces. The API token is scoped so it can only manage what it created.
// When a user starts a session
async function startUserSession(userId: string) {
  const userDataWs = await getUserDataWorkspace(userId); // permanent, exists per user

  // Create an on-demand workspace scoped to this user
  const sessionWs = await ws.create({
    name: `session-${userId}`,
    image: 'node-20',
    config: {
      ttl: '30m', // Auto-deletes after 30 minutes of inactivity
      remove_on_exit: true,
      allow_internet: false,
      egress: [userDataWs.info.internal_ip], // Can ONLY reach this user's data
    },
  });

  // Give the user a live environment
  const preview = await ws.publicAccess.expose(sessionWs.id, { port: 3000 });
  const ssh = await ws.ssh.enable(sessionWs.id);
  return { url: preview.url, ssh: ssh.connection.command };
}

// User wants a longer-lived environment - convert to permanent
async function upgradeSession(sessionWsId: string) {
  await ws.update(sessionWsId, { mode: 'permanent', config: { ttl: null } });
  // Workspace stays alive indefinitely now - user owns this environment
}

// Pause while idle, resume instantly on return
async function pauseOnIdle(id: string) { await ws.pause(id); }
async function resumeOnReturn(id: string) { await ws.resume(id); }

Your API token is scoped. Even if it created 100 user workspaces, a workspace created for User A cannot reach User B's workspace - the egress rules enforce that at the network level.
Best for: Online IDEs, AI coding assistants per user, browser automation per session, educational platforms, customer sandboxes.
7. Turning a workspace into a deployment
A workspace is just compute. There's nothing stopping you from running it as production infrastructure - permanent, always on, mapped to a custom domain. Start as a dev environment, go live without changing anything.
// Start as a dev environment
const deployment = await ws.create({
  name: 'my-api',
  image: 'node-20',
  mode: 'permanent',
  config: {
    cpus: 2,
    memory_mb: 4096,
    disk_size_mb: 20480, // 20 GB for app files and logs
  },
});

// Deploy the app - via your agent, a CI pipeline, or manually
const deployRt = await ws.runtime(deployment.id);
await deployRt.exec.run(['bash', '-c', 'git clone https://github.com/org/api.git /app && cd /app && npm install']);

// Run the server as a managed workload - auto-restarts on crash
await ws.workloads.create(deployment.id, {
  name: 'api-server',
  cmd: ['node', '/app/server.js'],
  env: {
    PORT: '3000',
    // `data` is a database workspace created earlier (as in pattern 5)
    DATABASE_URL: `postgres://${data.info.internal_ip}:5432/prod`,
    NODE_ENV: 'production',
  },
  restart: 'always',
});

// Map a custom domain - route is immediate, HTTPS becomes active once DNS points at the edge
const endpoint = await ws.publicAccess.expose(deployment.id, {
  port: 3000,
  domain: 'api.yourdomain.com',
});
// Live at https://api.yourdomain.com

// Deploy updates at any time - from an agent, a script, or SSH
await deployRt.exec.run(['bash', '-c', 'cd /app && git pull && npm install']);
await ws.workloads.stop(deployment.id, 'api-server');
await ws.workloads.start(deployment.id, 'api-server');

Same workspace, same SDK. No separate deployment infrastructure. The difference between a dev workspace and a production server is mode: 'permanent' and a domain mapping.
Best for: API servers, background services, deployments managed by agents or CI pipelines, anything that needs to be always on with a public URL.
8. Multi-agent with scoped control
Your lead agent coordinates specialist agents - each in their own workspace. The lead can reach them over the private network, assign work, and read results. Crucially, each specialist agent only has access to its own workspace - it can't see or interfere with others.
// Lead agent creates specialists - each is scoped independently
const researcher = await ws.create({
  name: 'researcher',
  image: 'python-3.12',
  mode: 'permanent',
  config: {
    cpus: 2,
    memory_mb: 4096,
    allow_internet: true, // Researcher needs web access
    ingress: [agent.info.internal_ip], // Only lead agent can reach it
  },
});

const coder = await ws.create({
  name: 'coder',
  image: 'node-20',
  mode: 'permanent',
  config: {
    cpus: 4,
    memory_mb: 8192,
    allow_internet: false,
    ingress: [agent.info.internal_ip], // Only lead agent can reach it
  },
});

// Temporary specialist - only needed for this task, self-destructs
const tester = await ws.create({
  name: 'tester',
  image: 'node-20',
  config: {
    ttl: '30m',
    remove_on_exit: true,
    ingress: [agent.info.internal_ip, coder.info.internal_ip],
  },
});

// Lead assigns work via Runtime
const researcherRt = await ws.runtime(researcher.id);
await researcherRt.exec.run(['python3', '/agent/research.py']);

const coderRt = await ws.runtime(coder.id);
await coderRt.exec.run(['node', '/agent/implement.js', '--spec', `http://${researcher.info.internal_ip}:8080/results`]);

const testerRt = await ws.runtime(tester.id);
await testerRt.exec.run(['npm', 'test', '--', '--target', `http://${coder.info.internal_ip}:3000`]);
// tester auto-deletes after 30m

Each specialist has exactly the access it needs. The researcher has internet. The coder and tester are private. None of them can reach each other - only the lead agent can reach all of them.
Best for: Multi-model pipelines, research + coding + testing flows, parallel specialist agents.
9. Parallel workers, one data store
Your agent fans out temporary workspaces in parallel - each handles one chunk of a job, writes results to a shared data workspace, and self-destructs when done. You control exactly how many workers run and what they can reach.
const items = ['chunk-1', 'chunk-2', 'chunk-3', 'chunk-4', 'chunk-5'];

// Shared results store - permanent, no internet
const store = await ws.create({
  name: 'results',
  image: 'postgres-16',
  mode: 'permanent',
  config: {
    allow_internet: false,
    ingress: ['10.0.0.0/8'], // Whole private range - any worker can write results
  },
});

// Fan out - one worker per item
const jobs = items.map(async (item) => {
  const worker = await ws.create({
    image: 'python-3.12',
    config: {
      ttl: '10m',
      remove_on_exit: true,
      allow_internet: true,
      egress: [item, store.info.internal_ip], // Only target + results store
    },
  });
  const workerRt = await ws.runtime(worker.id);
  await workerRt.files.write({
    fullPath: '/task/config.json',
    content: JSON.stringify({ source: item, db: store.info.internal_ip }),
  });
  return workerRt.exec.run(['python3', '/task/process.py']);
  // worker auto-deletes when done
});

const results = await Promise.all(jobs);

Workers are scoped - each one can only reach its specific data source and the shared store. Nothing else. When the job is done, every worker is gone.
Best for: Data pipelines, web scraping, video/image processing, any parallel compute job.
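`Promise.all` above launches every worker at once. If the item list is large, you may want to cap how many workspaces exist at any moment. A small self-contained pool helper - plain TypeScript, no SDK calls; `mapWithLimit` is our own name, not an Oblien API:

```typescript
// Run `fn` over `items` with at most `limit` tasks in flight at once.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each runner repeatedly claims the next unprocessed index
  async function runner(): Promise<void> {
    while (next < items.length) {
      const i = next++; // Safe: JS is single-threaded between awaits
      results[i] = await fn(items[i]);
    }
  }

  // Start `limit` runners; a runner picks up a new item as soon as it finishes one
  const runners = Array.from({ length: Math.min(limit, items.length) }, runner);
  await Promise.all(runners);
  return results;
}

// Usage with the fan-out above: at most 2 workers alive at any moment
// const results = await mapWithLimit(items, 2, processItem);
```

Results come back in input order, so downstream code doesn't care which worker finished first.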
Public access - expose ports, not VMs
Any workspace port becomes a live HTTPS endpoint on demand. You pick which ports to expose. Everything else stays completely unreachable.
How it actually works - and why it's safe
Your VM never gets a public IP. When you expose a port, Oblien's edge layer receives the traffic, terminates HTTPS, and proxies it to your workspace's internal IP on that port. The workspace itself only binds to localhost or its private internal IP - it never touches the public internet directly.
This means:
- A workspace with no exposed ports is completely unreachable from the internet - no IP, no attack surface
- A workspace with an exposed port is reachable only on that port, only via the Oblien proxy - nothing else
- Your app serves plain HTTP internally - Oblien handles TLS, certificates, and routing
- Revoking a port takes effect instantly - the URL stops working immediately, no DNS propagation
// Preview URL - instant, shareable
const preview = await ws.publicAccess.expose(workspaceId, {
  port: 3000,
  label: 'Frontend',
});
// → https://a1b2c3.preview.oblien.com

// Custom domain - route is immediate, HTTPS becomes active once DNS points at the edge
const prod = await ws.publicAccess.expose(workspaceId, {
  port: 3000,
  domain: 'app.yourdomain.com',
});
// → https://app.yourdomain.com (cert auto-renews after initial issuance)

// Each port gets its own independent URL
await ws.publicAccess.expose(id, { port: 3000, label: 'Frontend' });
await ws.publicAccess.expose(id, { port: 8080, label: 'API' });
await ws.publicAccess.expose(id, { port: 9090, label: 'WebSocket' });

// Revoke any port instantly
await ws.publicAccess.revoke(workspaceId, 3000);
// URL returns 404 immediately

- Up to 20 ports per workspace, each with its own URL or domain
- Works with any HTTP server (Node, Python, Go, static files, WebSockets)
- Non-exposed ports on the same workspace remain completely unreachable
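Conceptually, the edge's routing step is a lookup from incoming hostname to a private backend address. A simplified model of that lookup - our own sketch for intuition, not Oblien's implementation, and the route-table shape is an assumption:

```typescript
// Hypothetical route table: one entry per exposed port
interface Route {
  host: string;       // e.g. 'a1b2c3.preview.oblien.com' or 'app.yourdomain.com'
  internalIp: string; // The workspace's private IP
  port: number;       // The exposed port
}

// Resolve an incoming Host header to a private backend, or null (→ 404)
function resolveRoute(
  host: string,
  routes: Route[],
): { ip: string; port: number } | null {
  const match = routes.find((r) => r.host === host.toLowerCase());
  return match ? { ip: match.internalIp, port: match.port } : null;
}

const routes: Route[] = [
  { host: 'a1b2c3.preview.oblien.com', internalIp: '10.0.2.9', port: 3000 },
  { host: 'api.yourdomain.com', internalIp: '10.0.2.9', port: 8080 },
];

resolveRoute('api.yourdomain.com', routes); // → { ip: '10.0.2.9', port: 8080 }
resolveRoute('other.example.com', routes);  // → null - never exposed or revoked: 404
```

Revoking a port is just deleting its row, which is why the URL stops working immediately with no DNS propagation.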
Internal networking - isolated by default
Every workspace is invisible to every other workspace by default - even on the same account. There are no implicit trusts, no shared subnets that "just work". Connectivity is something you explicitly grant, one IP at a time.
The default: nothing can reach anything
A workspace with no ingress configuration cannot receive connections from any other VM - including other workspaces you own. It's a closed box.
// This workspace cannot receive connections from anyone
const isolated = await ws.create({
  image: 'postgres-16',
  mode: 'permanent',
  config: {
    allow_internet: false,
    // No ingress - unreachable from any direction
  },
});

Whitelisted connections - you decide who gets in
To allow workspace A to talk to workspace B, you explicitly whitelist A's internal IP in B's ingress rules. Nothing else can reach B.
const wsA = await ws.create({ image: 'node-20', mode: 'permanent' });

// wsB only accepts connections from wsA - nothing else
const wsB = await ws.create({
  image: 'postgres-16',
  mode: 'permanent',
  config: {
    allow_internet: false,
    ingress: [wsA.info.internal_ip], // Exact IP - not a range
  },
});

// wsA connects to wsB directly over private IP
const wsARt = await ws.runtime(wsA.id);
await wsARt.exec.run(['psql', '-h', wsB.info.internal_ip, '-U', 'postgres']);
// Any other workspace trying to reach wsB gets a connection refused

Egress control - lock down what a workspace can call
You control outbound too. A workspace can be allowed to call only specific external hosts or internal IPs - everything else is blocked at the network level.
// This workspace can only reach OpenAI and one specific internal DB
const sandboxed = await ws.create({
  image: 'python-3.12',
  config: {
    allow_internet: true,
    egress: [
      'api.openai.com', // External LLM API
      dataWs.info.internal_ip, // One internal workspace
      // Everything else: connection refused
    ],
  },
});

This is how scoped SaaS workspaces work - a user's sandbox can only reach that user's data workspace, even if 100 other workspaces exist on the same account.
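The ingress and egress rules above behave like per-workspace allow-lists: exact IPs, hostnames, or CIDR ranges like '10.0.0.0/8' and '0.0.0.0/0', with an empty list meaning default deny. A conceptual model of the check - a sketch for intuition only; real enforcement happens at Oblien's network layer, not in your code:

```typescript
// Parse a dotted-quad IPv4 address into an unsigned 32-bit integer
function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

// Does `addr` match one allow-list entry? Entries are exact strings
// (IPs or hostnames like 'api.openai.com') or CIDR ranges.
function matches(addr: string, entry: string): boolean {
  if (!entry.includes('/')) return addr === entry; // Exact IP or hostname
  const [base, bits] = entry.split('/');
  const prefix = parseInt(bits, 10);
  if (prefix === 0) return true; // '0.0.0.0/0' - everything
  const mask = (~0 << (32 - prefix)) >>> 0;
  return (ipToInt(addr) & mask) === (ipToInt(base) & mask);
}

// Empty allow-list = default deny: nothing gets through
function allowed(addr: string, allowList: string[]): boolean {
  return allowList.some((entry) => matches(addr, entry));
}

allowed('10.2.3.4', []);             // → false - closed box
allowed('10.2.3.4', ['10.2.3.4']);   // → true  - exact whitelist
allowed('10.2.3.4', ['10.0.0.0/8']); // → true  - whole private range
allowed('8.8.8.8', ['10.0.0.0/8']);  // → false - connection refused
```

The same check models both directions: `ingress` filters who can dial in, `egress` filters what the workspace can dial out to.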
Full air-gap
For maximum isolation - no inbound, no outbound, nothing:
const airgapped = await ws.create({
  image: 'python-3.12',
  config: {
    allow_internet: false,
    ingress: [], // No inbound from any VM
    ttl: '5m',
    remove_on_exit: true,
  },
});
// This workspace can talk to nothing and be reached by nothing

Ideal for executing untrusted code where zero data exfiltration is the requirement.