Bring Your Own Server

Your Hardware. Our Management Layer.

Connect a VPS from DigitalOcean, Vultr, Hetzner, AWS, Akamai, or plug in any SSH-accessible box. We deploy and maintain OpenClaw — you keep full control.

Have a server ready?

Link it up and we will have OpenClaw running in minutes.

Start BYOS Setup

Why Run OpenClaw on Your Own Iron?

Retain ownership of your data, your models, and your uptime.

Complete Data Sovereignty

Nothing leaves your machine. Meet GDPR, HIPAA, or any data-residency requirement without compromise.

Private AI with Local Models

Pair BYOS with GPU hardware to run Ollama, llama.cpp, or similar engines — no external API calls.

Full Root Access

Tweak kernel settings, install custom packages, or configure networking however you need. It is your box.

Choose Your Region

Pick a data center close to your users, or one your regulations mandate. Latency and compliance, solved.

Three Steps to a Running Bot

From bare server to live AI assistant in minutes, not hours

01

Link Your Server

Share SSH credentials or connect through your provider's API. We support DigitalOcean, Vultr, Hetzner, AWS, Akamai, and generic SSH.
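
If you want to sanity-check access before linking, here is a minimal sketch using Python's paramiko library; the host, user, and key path are placeholders you would substitute with your own:

    import os
    import paramiko

    # Placeholders: substitute your server's address, user, and key path.
    HOST = "203.0.113.10"  # example IP from the documentation range
    USER = "root"
    KEY_PATH = os.path.expanduser("~/.ssh/id_ed25519")

    client = paramiko.SSHClient()
    # Accept the host key on first connect; pin it explicitly for anything serious.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=KEY_PATH, timeout=10)

    # Run a harmless command to confirm a working shell.
    _, stdout, _ = client.exec_command("uname -a && whoami")
    print(stdout.read().decode())
    client.close()

If that prints your kernel string and user, the box is ready to link.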

02

We Install OpenClaw

Our automation provisions the full stack — runtime, database, reverse proxy, SSL. Add your AI provider's API key and connect your messaging channels.

03

Bot Goes Live

Your OpenClaw instance is up and answering messages. We handle ongoing updates and health monitoring; you keep root access at all times.
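
You can also verify the provisioned stack from the outside whenever you like. Here is a minimal sketch using only Python's standard library that confirms the reverse proxy answers over TLS with a valid certificate; the domain is a placeholder for your instance's hostname:

    import socket
    import ssl
    import time

    HOST = "bot.example.com"  # placeholder: substitute your instance's domain

    # Connect through the reverse proxy over TLS and inspect the certificate.
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            expires = ssl.cert_time_to_seconds(cert["notAfter"])
            days_left = int((expires - time.time()) / 86400)
            print(f"TLS handshake OK; certificate valid for {days_left} more days")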

BYOS Questions, Answered

How does BYOS actually work?

You give us SSH access (or connect via a provider API). We install the OpenClaw stack, wire up your messaging channels, and keep everything updated. You keep root access the entire time — we never lock you out of your own machine.
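
For the provider-API route, linking means granting an API token rather than SSH credentials. As an illustration of what such a token exposes (the exact calls our automation makes are internal), here is a minimal sketch against DigitalOcean's public API that lists your droplets, reading the token from an environment variable:

    import os
    import requests

    # A DigitalOcean personal access token, created in the DO control panel.
    token = os.environ["DO_TOKEN"]

    resp = requests.get(
        "https://api.digitalocean.com/v2/droplets",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()

    for droplet in resp.json()["droplets"]:
        print(droplet["id"], droplet["name"], droplet["status"])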

Which cloud providers are officially supported?

We have first-class integrations for DigitalOcean, Vultr, Hetzner, AWS EC2, and Akamai (Linode). Anything else that exposes SSH works too — on-premises hardware, OVH, Oracle Cloud, you name it.

Can I run fully private AI models?

If your server has a GPU, you can run Llama models through Ollama or llama.cpp without sending a single token to an external API. Great for sensitive data or air-gapped environments.
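
As a concrete example, once Ollama is serving a model on the box, inference is a single local HTTP call. A minimal sketch against Ollama's local REST API, assuming you have already pulled a model (e.g. with ollama pull llama3):

    import requests

    # Ollama binds to localhost by default, so no tokens leave the machine.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # any model you've pulled locally
            "prompt": "Summarize GDPR in one sentence.",
            "stream": False,    # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])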

What does BYOS cost?

The management fee starts at $29/month on our Base plan. You pay your cloud provider directly for compute. At scale, this often works out cheaper than fully managed alternatives.

Do I need to be a sysadmin?

Not really. You need to know how to spin up a VPS and share SSH credentials — we handle the heavy lifting from there. If anything goes wrong, our support team steps in.

What are the minimum server requirements?

2 vCPU, 2 GB RAM, 20 GB disk, and Ubuntu 22.04 or newer. For local LLM inference, plan for an NVIDIA GPU and additional RAM. Our docs include sizing recommendations for each workload.
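
To preflight a box against these minimums, here is a small Linux-only sketch using only the standard library; the thresholds mirror the numbers above, and the small RAM allowance accounts for kernel-reserved memory:

    import os
    import shutil

    MIN_VCPU, MIN_RAM_GB, MIN_DISK_GB = 2, 2, 20

    vcpus = os.cpu_count() or 0

    # MemTotal is reported in kB in /proc/meminfo.
    with open("/proc/meminfo") as f:
        mem_kb = next(int(line.split()[1]) for line in f if line.startswith("MemTotal"))
    ram_gb = mem_kb / 1024 / 1024

    disk_gb = shutil.disk_usage("/").total / 1024**3

    # Report the OS so you can confirm Ubuntu 22.04 or newer.
    with open("/etc/os-release") as f:
        os_info = dict(line.rstrip().split("=", 1) for line in f if "=" in line)
    print("OS:  ", os_info.get("PRETTY_NAME", "unknown").strip('"'))

    print(f"vCPU: {vcpus} (need {MIN_VCPU})")
    print(f"RAM:  {ram_gb:.1f} GB (need {MIN_RAM_GB})")
    print(f"Disk: {disk_gb:.0f} GB (need {MIN_DISK_GB})")

    # Allow ~0.2 GB slack: a "2 GB" VPS reports slightly less usable RAM.
    ok = vcpus >= MIN_VCPU and ram_gb >= MIN_RAM_GB - 0.2 and disk_gb >= MIN_DISK_GB
    print("PASS" if ok else "FAIL")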

Ready to Bring Your Own Server?

Plug in your hardware and let us handle the rest. Full control, zero lock-in.