How to Set Up OpenClaw on a Hetzner VPS for Under $10/Month

You’re running a lean operation. Maybe it’s a personal knowledge base, a niche community forum, or a specialized data analysis pipeline. You need the power of an AI assistant, but the recurring cloud costs for dedicated services are eating into your budget faster than a forgotten cron job. This is where self-hosting OpenClaw on a lean Hetzner VPS becomes not just a viable option, but a genuine game-changer.

For under $10 a month, you can get a powerful, private AI companion without compromising on performance for your specific, optimized workloads. The core challenge with low-cost VPS hosting for AI is often resource allocation, especially RAM and CPU cycles for model inference. A common mistake is to try and squeeze a large language model onto a tiny instance, leading to constant swap thrashing and glacial response times. The trick here is to leverage smaller, optimized models, often quantized, and pair them with an efficient inference engine. Let’s dive into how to set this up.

Choosing Your Hetzner VPS: The Sweet Spot

Hetzner Cloud offers excellent bang for your buck, providing robust virtual servers at competitive prices. For our OpenClaw setup, we’re looking for a balance of CPU cores and RAM that won’t break the bank but can still handle model inference without choking.

  • CX11 (€4.75/month): This is our entry-level recommendation. It comes with 1 vCPU, 2 GB RAM, and 20 GB NVMe SSD. Be realistic about what fits, though: a 4-bit GGUF of a 7B-parameter model weighs roughly 4 GB, so on 2 GB of RAM you should target smaller models in the 1–3B range (for example, a quantized TinyLlama or Phi-2) for light usage.
  • CX21 (€7.90/month): If your budget allows for a little more headroom, the CX21 is the sweet spot. With 2 vCPUs, 4 GB RAM, and 40 GB NVMe SSD, it can just about accommodate a tightly quantized 7B model (like Mistral 7B in a 4-bit GGUF format), and it handles 1–3B models with room to spare for concurrent requests. This is often the ideal choice for a personal assistant that sees regular use.
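As a rough rule of thumb, a quantized model's weight memory is its parameter count times the bits per weight, divided by eight; the KV cache and runtime add perhaps 10–30% on top. A quick back-of-the-envelope check before you pick a plan:

```shell
# Rough weight-memory estimate for a quantized model.
# bytes ≈ parameters × bits_per_weight / 8 (KV cache and runtime add more on top)
params=7000000000   # a 7B-parameter model
bits=4              # 4-bit quantization, e.g. a Q4_0 GGUF
bytes=$((params * bits / 8))
echo "~$((bytes / 1024 / 1024 / 1024)) GiB for weights alone"
```

If the estimate comes out above your plan's RAM, step down a model size or a quantization level rather than relying on swap.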

For this guide, we’ll assume a CX11 or CX21 instance running Ubuntu 22.04 LTS, which is a stable and well-supported operating system for our purposes. When provisioning your server, ensure you set up SSH keys for secure access – password authentication should be disabled after initial setup.

Initial Server Setup and Security Hardening

Once your Hetzner VPS is provisioned, connect to it via SSH. Replace your_server_ip with your actual server IP address.

ssh root@your_server_ip

First, let’s update our system and ensure we have basic utilities.

apt update && apt upgrade -y
apt install -y curl wget git

For security, it’s best practice to create a non-root user and disable root login. We’ll also set up a basic firewall.

adduser openclawuser
usermod -aG sudo openclawuser
mkdir -p /home/openclawuser/.ssh
cp ~/.ssh/authorized_keys /home/openclawuser/.ssh/
chown -R openclawuser:openclawuser /home/openclawuser/.ssh
chmod 700 /home/openclawuser/.ssh
chmod 600 /home/openclawuser/.ssh/authorized_keys

Now, log out of root and log back in as openclawuser.

exit
ssh openclawuser@your_server_ip
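With key-based access confirmed for openclawuser, you can now disable root login and password authentication, as mentioned earlier. A minimal sketch of the relevant settings in /etc/ssh/sshd_config (edit with sudo nano /etc/ssh/sshd_config, then apply with sudo systemctl restart ssh):

```
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

Keep a second SSH session open as openclawuser while you restart sshd, so a typo in the config can't lock you out.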

Configure the Uncomplicated Firewall (UFW) to allow SSH and OpenClaw’s default port (which we’ll define later, usually 8000 or 8080).

sudo ufw allow OpenSSH
sudo ufw allow 8000/tcp # Or your chosen OpenClaw port
sudo ufw enable

When prompted, type y and press Enter. Your firewall is now active.

Installing Docker and Docker Compose

Docker is essential for our setup. It allows us to containerize OpenClaw and its inference engine, making deployment and management incredibly straightforward. Docker Compose will help us define and run multi-container applications.

Install Docker:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker openclawuser # Add your user to the docker group
newgrp docker # Apply group changes without logging out

Install Docker Compose. The convenience script above usually installs the compose plugin already; if docker compose version fails, install it manually (check the releases page for the latest stable version):

sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-linux-x86_64" -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose

Verify installations:

docker --version
docker compose version

Setting Up OpenClaw with Optimized Models

This is where the magic happens. OpenClaw itself is relatively lightweight, but the inference engine that runs the Large Language Models (LLMs) is the resource hog. To keep costs under $10/month, we absolutely *must* leverage quantized models and efficient inference engines.

For CPU-only inference, llama.cpp (or tools built on it like Ollama) is the gold standard. It allows us to run models in the GGUF format, which are highly optimized and quantized versions of popular models.

Let’s create a directory for our OpenClaw project and a docker-compose.yml file:

mkdir openclaw-hetzner
cd openclaw-hetzner
touch docker-compose.yml

Now, open docker-compose.yml with your favorite editor (e.g., nano docker-compose.yml) and paste the following configuration. We’ll use Ollama as our inference engine for simplicity, as it handles model downloads and serves a compatible API.

version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: openclaw_ollama
    ports:
      - "11434:11434" # Ollama API port
    volumes:
      - ollama_data:/root/.ollama # Persist downloaded models across restarts

volumes:
  ollama_data:
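With the compose file in place, a typical first run looks like the sketch below (the container name openclaw_ollama and port 11434 come from the snippet above; mistral is just one example of a model tag Ollama can pull):

```shell
# Start the stack in the background
docker compose up -d

# Pull a quantized model into the Ollama container
docker exec openclaw_ollama ollama pull mistral

# Smoke-test the API with a single non-streaming request
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Say hello in one sentence.", "stream": false}'
```

The first pull downloads several gigabytes, so expect it to take a while on the initial run; subsequent starts reuse the model stored in the ollama_data volume.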
