How to Deploy OpenClaw to the Cloud in One Click
Deploy the OpenClaw AI agent framework to the cloud in under a minute. No server setup, no Docker config - just a running agent instantly.
OpenClaw is one of the most popular open-source frameworks for building autonomous AI agents. It handles reasoning, planning, tool use, and multi-step execution - basically everything you need to build an agent that can actually get things done.
But deploying it? That's where most people get stuck.
You need a server that stays running. You need persistent storage for agent memory. You need proper isolation because the agent executes arbitrary code. And you need it to restart automatically if it crashes.
This guide shows you how to get OpenClaw running in the cloud in about 60 seconds, with none of the usual infrastructure headaches.
What you need before you start
- An Oblien account (free tier works fine for testing)
- An API key from your LLM provider (OpenAI, Anthropic, or whichever model you use with OpenClaw)
That's it. No AWS account, no Docker knowledge, no Kubernetes cluster, no Terraform files.
Step 1: Create a workspace
Log into the Oblien dashboard and click Create Workspace. Think of a workspace as your agent's home - a persistent cloud computer that stays on and restarts automatically.
Choose these settings:
- Image: `node-22` (OpenClaw runs on Node.js)
- CPU: 2 cores (enough for most agent workloads)
- Memory: 4 GB
- Disk: 10 GB (room for packages, agent memory, and logs)
Hit create. Your workspace boots in about 130 milliseconds - yes, that's a full Linux VM, not a container.
Step 2: Open the terminal
Click into your workspace and open the Terminal tab. You now have a full Linux shell running in the cloud. Everything you do here persists across restarts.
Step 3: Install OpenClaw
In the terminal, run:
```bash
npm install -g openclaw
```

That's it. OpenClaw is installed globally and ready to use. Because the workspace has a writable persistent disk, the installation survives reboots.
Step 4: Set your environment variables
Your agent needs API keys to talk to the LLM. In the workspace's Settings tab, add your environment variables:
- `OPENAI_API_KEY` - Your OpenAI key (or whichever provider you use)
- `AGENT_MODE` - Set to `persistent` for always-on behavior
Alternatively, create a .env file in the terminal:
```bash
echo 'OPENAI_API_KEY=sk-your-key-here' > ~/.env
echo 'AGENT_MODE=persistent' >> ~/.env
```

Step 5: Create your agent configuration
OpenClaw uses a configuration file to define your agent's behavior. Create one:
```bash
mkdir ~/agent && cd ~/agent
```

Create an agent.config.js with your agent's setup - its system prompt, available tools, memory settings, and any custom behaviors. OpenClaw's documentation covers this extensively, but the basics are straightforward: define what the agent can do and let OpenClaw handle the reasoning loop.
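As a rough sketch of what such a config might look like - note that the field names below are illustrative assumptions, not OpenClaw's documented schema, so check the docs for your installed version:

```javascript
// agent.config.js - a hypothetical sketch only; the actual field names and
// structure depend on the OpenClaw version you install, so consult its docs.
const config = {
  name: "cloud-agent",
  // Which LLM to use; assumes the provider key is set via OPENAI_API_KEY.
  model: "gpt-4o",
  systemPrompt: "You are a helpful autonomous agent running in the cloud.",
  // Tools the agent may call; these names are illustrative.
  tools: ["shell", "web_search", "file_io"],
  // Persist memory to the workspace's writable disk so it survives restarts.
  memory: {
    persist: true,
    path: "/root/agent/memory",
  },
};

module.exports = config;
```

The key design point carries over regardless of the exact schema: point any memory or state paths at the workspace's persistent disk, so restarts don't wipe the agent's history.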
Step 6: Run your agent as a managed process
Here's where Oblien makes things easy. Instead of using screen or tmux or writing a systemd service, use the Workloads feature to run your agent as a managed background process:
In the dashboard, go to your workspace → Workloads → Create Workload:
- Command: `openclaw start ~/agent/agent.config.js`
- Restart policy: `always` (if it crashes, it restarts automatically)
Your agent is now running in the background. You can view its logs in real time from the dashboard, stop/restart it with a click, and it survives workspace reboots.
What you just set up (and why it matters)
Let's take a step back and appreciate what this environment gives you compared to running OpenClaw on your laptop or a bare EC2 instance:
Hardware isolation
Your agent runs inside a Firecracker microVM - its own kernel, its own memory, its own encrypted disk. Even though the agent can execute arbitrary code (that's kind of the point), it can't affect anything outside its workspace.
Persistence
The workspace stays running. Agent memory, conversation history, installed packages - everything persists. If the workspace restarts, your agent picks up where it left off.
Automatic recovery
With the restart policy set to "always," your agent restarts automatically if it crashes. No cron jobs, no process managers, no manual intervention.
No exposed ports by default
Your workspace starts with zero-trust networking. Nothing can reach it unless you explicitly allow it. This means your agent isn't accidentally exposed to the internet.
Easy monitoring
Check CPU usage, memory, disk I/O, and network traffic from the dashboard. Stream logs in real time. No need to SSH in and tail files.
Optional: Give your agent superpowers
Once your OpenClaw agent is running, you can extend what it can do using Oblien's features:
Let it create child workspaces
Give your agent an API token and it can create temporary workspaces for isolated tasks. Need to run untrusted code from a user? Spin up a workspace, execute it there, destroy it. The task workspace is completely isolated from the agent's home.
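A sketch of how an agent might build such a request - the endpoint URL and payload fields below are assumptions for illustration, not Oblien's documented API, so check the real API reference:

```javascript
// Hypothetical sketch: the endpoint and payload shape are assumptions,
// not Oblien's documented API - verify against the real API reference.
function buildCreateWorkspaceRequest(apiToken, taskLabel) {
  return {
    url: "https://api.oblien.example/v1/workspaces", // assumed endpoint
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        image: "node-22",
        ephemeral: true, // tear down automatically when the task finishes
        label: taskLabel,
      }),
    },
  };
}
```

The agent would pass the result to `fetch()`, run the untrusted task inside the new workspace, then delete it - keeping every risky execution one disposable VM away from its own state.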
Connect it to a database
Create a second workspace running Postgres. Set up a private link so only your agent's workspace can reach it. Your AI agent now has a private database that's invisible to everything else on the internet.
Expose it publicly
If your agent needs to receive webhooks or serve an API, expose a port from the dashboard. Oblien gives you an instant HTTPS URL. When you're ready, map a custom domain.
Add SSH access
Enable SSH on the workspace for debugging. Oblien provides bastion-style access - no public IP needed. Just ssh through the Oblien gateway.
Compared to other deployment options
| Approach | Setup Time | Isolation | Persistence | Auto-Restart | Cost to Start |
|---|---|---|---|---|---|
| Local machine | Minutes | None | Yes | No | Free |
| EC2 / GCE VM | 15-30 min | Strong | Yes | Manual setup | ~$30/mo |
| Docker on VPS | 10-20 min | Weak | Depends | Manual setup | ~$10/mo |
| Railway / Render | 5-10 min | Container-level | Yes | Yes | ~$5/mo |
| Oblien workspace | ~1 min | Hardware-level | Yes | Yes | Free tier |
Troubleshooting
Agent crashes immediately? Check the workload logs in the dashboard. Most common cause: missing environment variables or wrong Node.js version.
Can't install packages? Make sure your workspace has internet access enabled (it's on by default). Check disk space - if the writable disk is full, installations can fail silently.
Agent is slow to respond? Check the metrics tab. If CPU is consistently at 100%, bump up to 4 cores. If memory is full, increase to 8 GB.
Need to update OpenClaw? Open the terminal, run `npm update -g openclaw`, then restart the workload from the dashboard.
What's next
You now have an OpenClaw agent running in the cloud on isolated infrastructure. From here, you can:
- Build more complex agent architectures - add specialist agents in separate workspaces
- Connect to databases and external services - using private links and scoped network rules
- Process user workloads - let your agent create temporary sandboxes for each task
- Scale up - bump resources from the dashboard as your usage grows
The whole point is that you focus on building the agent's capabilities. The infrastructure handles itself.