
How to Run Any Open-Source AI Agent Framework in the Cloud

Deploy LangChain, CrewAI, AutoGen, or any AI agent framework in isolated cloud environments. Pre-built images and one-command setup.

Oblien Team

The AI agent ecosystem is exploding. LangChain, CrewAI, AutoGen, OpenHands, Claude Code, Aider, GPT-Engineer, MetaGPT, BabyAGI, Devin - new frameworks launch every week.

The challenge every developer faces: getting these frameworks to actually run. They need specific Python versions, system dependencies, persistent storage, network access, and often hours of environment configuration. And when they run locally, they have unrestricted access to your machine.

Running them in isolated cloud workspaces solves both problems - instant setup and full isolation.


The Problem with Running Agents Locally

Security

An AI agent running on your laptop has access to everything: your files, your SSH keys, your browser cookies, your cloud credentials. If you're testing a new framework, do you really trust it with all of that?

Environment conflicts

LangChain needs one version of pydantic. CrewAI needs another. AutoGen needs specific Microsoft packages. Running multiple frameworks on one machine means endless dependency conflicts.

Resource limits

Your laptop has 8-16 GB of RAM and a few CPU cores. Running a multi-agent system that spawns 10 agents, each with their own tools and memory, can bring your machine to a crawl.

Reproducibility

"It works on my machine" is worse for agents than for regular software. An agent's behavior depends on available tools, installed packages, and system configurations. If two developers get different results from the same agent, environment differences are the likely cause.


One-Command Setup for Any Framework

The workflow is the same regardless of framework:

  1. Create a workspace with a Python or Node.js image
  2. Install the framework (pip install, npm install)
  3. Set your API keys (environment variables)
  4. Run the agent

Because each workspace is an isolated Linux VM, there are no conflicts. Run LangChain in one workspace and CrewAI in another - different Python versions, different dependencies, zero interference.


Framework Quick-Start Guides

LangChain / LangGraph

LangChain is the most popular agent framework with extensive tool support and chain composition.

Create a Python workspace, install langchain and langchain-openai (or your preferred LLM provider), set your OPENAI_API_KEY environment variable, and run your agent script.
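A minimal agent script in that shape might look like the following - a sketch, not Oblien-specific; the package and model names (`langchain-openai`, `gpt-4o-mini`) are common defaults you may need to adjust for your provider:

```python
# minimal_langchain_agent.py - a sketch, assuming `pip install langchain langchain-openai`
import os

def build_question(topic: str) -> str:
    """Pure helper: format the prompt sent to the model."""
    return f"Summarize the key ideas behind {topic} in two sentences."

def main() -> None:
    # Imported lazily so the file still parses before dependencies are installed.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an example
    reply = llm.invoke(build_question("retrieval-augmented generation"))
    print(reply.content)

if __name__ == "__main__":
    if not os.environ.get("OPENAI_API_KEY"):
        print("Set OPENAI_API_KEY in the workspace before running.")
    else:
        try:
            main()
        except ImportError:
            print("Run `pip install langchain langchain-openai` first.")
```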

LangChain agents work great in workspaces because they get:

  • Full filesystem access for document loading and RAG
  • Network access for web tools and API calls
  • Persistent storage for vector databases (ChromaDB, FAISS)
  • Enough memory for large document processing

CrewAI

CrewAI orchestrates multiple AI agents working together as a "crew" - each agent has a role, a goal, and a set of tools.

Install crewai and crewai-tools, configure your LLM keys, define your agents and tasks, and let the crew run. Each agent in the crew runs within the same workspace, sharing the filesystem for collaboration.
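A minimal crew can be sketched like this - an illustration assuming `pip install crewai` and an OpenAI key, since CrewAI defaults to OpenAI models; roles and task text are placeholders:

```python
# crew_sketch.py - a sketch, assuming `pip install crewai crewai-tools`
import os

def crew_plan() -> dict:
    """Pure helper: the roles and goals we want, as plain data."""
    return {
        "researcher": "Find three recent articles on AI agent sandboxing.",
        "writer": "Turn the research notes into a short blog post.",
    }

def main() -> None:
    from crewai import Agent, Task, Crew  # imported lazily

    plan = crew_plan()
    researcher = Agent(role="Researcher", goal=plan["researcher"],
                       backstory="Diligent analyst.")
    writer = Agent(role="Writer", goal=plan["writer"],
                   backstory="Concise technical writer.")
    tasks = [
        Task(description=plan["researcher"], expected_output="Bullet-point notes",
             agent=researcher),
        Task(description=plan["writer"], expected_output="A 300-word draft",
             agent=writer),
    ]
    # Agents share the workspace filesystem, so artifacts persist between tasks.
    result = Crew(agents=[researcher, writer], tasks=tasks).kickoff()
    print(result)

if __name__ == "__main__":
    if not os.environ.get("OPENAI_API_KEY"):
        print("Set OPENAI_API_KEY before running the crew.")
    else:
        try:
            main()
        except ImportError:
            print("Run `pip install crewai crewai-tools` first.")
```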

For production CrewAI deployments, use workload management to keep the crew running as a background process with auto-restart.

AutoGen (Microsoft)

AutoGen enables multi-agent conversations where agents collaborate through chat. Install pyautogen, configure your model endpoints, and set up agent conversations.
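A two-agent conversation can be set up along these lines - a sketch using pyautogen's classic `AssistantAgent`/`UserProxyAgent` pattern; the model name and message are placeholders:

```python
# autogen_sketch.py - a sketch, assuming `pip install pyautogen`
import os

def llm_config() -> dict:
    """Model endpoint configuration; the model name is an example."""
    return {"config_list": [{"model": "gpt-4o-mini",
                             "api_key": os.environ.get("OPENAI_API_KEY", "")}]}

def main() -> None:
    from autogen import AssistantAgent, UserProxyAgent  # imported lazily

    assistant = AssistantAgent(name="assistant", llm_config=llm_config())
    user = UserProxyAgent(name="user", human_input_mode="NEVER",
                          code_execution_config=False)
    # Agents collaborate through chat turns until the limit is reached.
    user.initiate_chat(assistant,
                       message="List three uses of isolated workspaces.",
                       max_turns=2)

if __name__ == "__main__":
    if not os.environ.get("OPENAI_API_KEY"):
        print("Set OPENAI_API_KEY before starting the conversation.")
    else:
        try:
            main()
        except ImportError:
            print("Run `pip install pyautogen` first.")
```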

AutoGen workloads benefit from larger workspaces (2-4 CPU, 4-8 GB RAM) since multiple agents run concurrently in the same process.

OpenHands (formerly OpenDevin)

OpenHands is an autonomous AI agent that writes code, manages files, and runs commands - similar to Devin.

It's particularly well-suited to workspace environments because it's designed to work inside a development environment. Give it a workspace with your codebase, and it treats the workspace as its development machine.

Claude Code

Claude Code is Anthropic's AI coding agent that works in a terminal. Create a Node.js workspace, install @anthropic-ai/claude-code, set your Anthropic API key, and launch it.

Claude Code gets full terminal access inside the workspace - it can install packages, create files, run builds, and test code. All safely isolated from your local machine.

Aider

Aider is a terminal-based AI pair programming tool. Install via pip, point it at your repository, and start coding together.

Aider works especially well in workspaces because it needs:

  • Git access (pre-installed in most images)
  • Full filesystem access
  • Terminal interaction
  • Network access for model API calls

Multi-Framework Architecture

For complex projects, run different frameworks in different workspaces:

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Workspace A    │     │  Workspace B    │     │  Workspace C    │
│                 │     │                 │     │                 │
│  CrewAI         │────→│  Claude Code    │────→│  Production     │
│  (orchestrator) │     │  (code writer)  │     │  (final app)    │
│                 │     │                 │     │                 │
│  Plans the      │     │  Writes the     │     │  Runs the       │
│  work           │     │  code           │     │  result         │
└─────────────────┘     └─────────────────┘     └─────────────────┘
        Connected via private networking

  • CrewAI in Workspace A - orchestrates the project, decides what to build
  • Claude Code in Workspace B - receives coding tasks, implements them
  • Production Workspace C - runs the final application

Private networking connects them. Each workspace is isolated but can communicate securely.
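The handoff between workspaces can be sketched with plain HTTP and only the standard library. Here both sides run as threads in one process for demonstration; in practice each side lives in its own workspace, and `HOST` would be the peer's private address (the value below is a stand-in):

```python
# Sketch of workspace-to-workspace communication over the private network.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

HOST, PORT = "127.0.0.1", 8901  # stand-in for a workspace's private address

class TaskHandler(BaseHTTPRequestHandler):
    def do_POST(self):  # Workspace B: accept a coding task, acknowledge it
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"accepted": True, "task": body["task"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def send_task(task: str) -> dict:
    """Workspace A: hand a task to the code-writer workspace."""
    req = urllib.request.Request(
        f"http://{HOST}:{PORT}/tasks",
        data=json.dumps({"task": task}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

server = HTTPServer((HOST, PORT), TaskHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = send_task("implement the login endpoint")
server.shutdown()
print(result)  # {'accepted': True, 'task': 'implement the login endpoint'}
```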


Custom Images for Repeat Use

If you frequently use the same framework setup, create a custom image:

  1. Start with a base image (Python 3.13)
  2. Install your framework and all dependencies
  3. Pre-download models or embeddings
  4. Configure default settings
  5. Snapshot the workspace

Next time, create a workspace from the snapshot - everything is pre-installed, boot to running agent in seconds.


Workload Management for Production Agents

For agents that need to run continuously (not just one-off tasks):

Create the agent as a workload with an auto-restart policy. The agent runs as a managed background process. If it crashes, it restarts automatically. You can view logs, monitor resource usage, and manage lifecycle - all without SSH.
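The restart policy itself is conceptually simple. A stdlib-only sketch of the loop a workload manager runs (a real manager adds log capture, backoff, and resource limits):

```python
# Minimal sketch of an auto-restart policy, standard library only.
import subprocess
import sys
import time

def run_with_restart(cmd: list[str], max_restarts: int = 3) -> int:
    """Run `cmd`; restart it on non-zero exit, up to `max_restarts` times.

    Returns the number of restarts performed."""
    restarts = 0
    while True:
        code = subprocess.run(cmd).returncode
        if code == 0 or restarts >= max_restarts:
            return restarts
        restarts += 1
        time.sleep(0.1)  # brief pause before restarting

# Demo: a "crashing agent" that always exits non-zero, so it is restarted 3 times.
restarts = run_with_restart([sys.executable, "-c", "raise SystemExit(1)"],
                            max_restarts=3)
print(restarts)  # 3
```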

Multiple agents in one workspace:

  • Workload 1: Orchestrator agent (LangChain, always running)
  • Workload 2: Worker agent (CrewAI, restarts on failure)
  • Workload 3: Monitoring agent (custom Python, always running)

Each workload is independently managed with its own restart policy and log stream.
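Such a setup can be described as plain data. The schema below is purely illustrative (it is not Oblien's actual API) but shows the per-workload restart policies:

```python
# Hypothetical workload definitions as plain data - illustrative schema only.
workloads = [
    {"name": "orchestrator", "cmd": "python orchestrator.py", "restart": "always"},
    {"name": "worker",       "cmd": "python worker.py",       "restart": "on-failure"},
    {"name": "monitor",      "cmd": "python monitor.py",      "restart": "always"},
]

def restart_policy(name: str) -> str:
    """Look up the restart policy for a named workload."""
    return next(w["restart"] for w in workloads if w["name"] == name)

print(restart_policy("worker"))  # on-failure
```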


Resource Recommendations by Framework

Framework            Recommended CPU   Recommended RAM   Notes
LangChain            1-2               1-2 GB            More for RAG with large docs
CrewAI               2                 2-4 GB            Multiple agents in one process
AutoGen              2-4               4-8 GB            Multi-agent conversations use memory
OpenHands            2-4               4 GB              Runs builds and tests
Claude Code          2                 2 GB              Terminal-based, moderate usage
Aider                1-2               1-2 GB            Lightweight
Custom multi-agent   4+                8+ GB             Scale with agent count

Security Considerations

Running any AI agent framework carries risk - the agent runs arbitrary code. In a workspace:

  • Code execution is sandboxed - the agent can only affect its own workspace
  • Network is controlled - restrict outbound access to only the APIs the agent needs
  • Credentials are isolated - API keys in one workspace aren't visible to others
  • Cleanup is automatic - delete the workspace and everything (including the encryption key) is gone
  • No local machine risk - your laptop, SSH keys, and browser sessions are safe

This is especially important when testing new or experimental frameworks that you haven't fully vetted.


Summary

Run any AI agent framework in the cloud:

  1. Create a workspace - Python or Node.js image, takes ~130ms
  2. Install the framework - pip or npm, 30 seconds
  3. Set API keys - environment variables
  4. Run the agent - full Linux environment, full isolation
  5. Scale up - more CPU/RAM as needed, multiple workloads for multiple agents
  6. Clean up - delete the workspace, everything is gone

No conflicts. No security risks. No "works on my machine." Just agents running in their own isolated environments.

Related reading: Multi-Agent System Architecture | Run Claude Code in the Cloud | Oblien Documentation