If you’re looking to turn your OpenClaw instance into a personal, always-on AI assistant accessible from your phone, connecting it to Telegram is the most practical solution. The common pitfall is thinking you need complex webhooks or a full-blown web server. For most users, a simple polling mechanism combined with a systemd service is far more robust and easier to maintain, especially on a VPS where resources are shared. I’ve found this setup to be rock-solid on a Hetzner CX11, providing continuous uptime without the headaches of managing external reverse proxies.
Setting up Your Telegram Bot
First, you need a Telegram bot. Open a chat with @BotFather on Telegram and send it /newbot, then give your bot a display name (e.g., “MyOpenClawAI”) and a unique username that ends in “bot” (e.g., “MyOpenClaw_bot”). BotFather will reply with an API token that looks something like 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11. Keep this token safe; it’s how your OpenClaw instance will authenticate to Telegram.
Next, you need to get your Telegram User ID. There are several bots for this, but @userinfobot is reliable. Just start a chat with it, and it will tell you your ID (a sequence of digits). This is crucial because you don’t want your OpenClaw bot to respond to just anyone on Telegram; you want it to be exclusively for you or a trusted group.
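If you’d rather not hand your details to a third-party bot, you can read the same information straight from the Bot API with curl. Replace <TOKEN> with the token BotFather gave you; after you send your new bot any message, your numeric ID appears in the getUpdates response:

```shell
# Sanity-check the token: a successful response includes your bot's username
curl -s "https://api.telegram.org/bot<TOKEN>/getMe"

# After messaging your bot once, look for result[].message.from.id here --
# that number is your Telegram user ID
curl -s "https://api.telegram.org/bot<TOKEN>/getUpdates"
```

Both getMe and getUpdates are standard Bot API methods; the commands are illustrative until you substitute a real token.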
Configuring OpenClaw for Telegram Integration
OpenClaw doesn’t have native Telegram integration out of the box, but we can easily bridge it using a small Python script that acts as a middleware. This script will poll Telegram for new messages, pass them to OpenClaw, and then send OpenClaw’s responses back to Telegram. This approach avoids exposing OpenClaw directly to the internet, which is a significant security benefit.
Let’s create a new directory for our Telegram bridge script. On your VPS, navigate to your OpenClaw installation directory, typically ~/openclaw or /opt/openclaw. Then:
mkdir -p ~/openclaw-telegram
cd ~/openclaw-telegram
touch telegram_bridge.py
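The bridge script depends on the third-party requests library, which isn’t part of the Python standard library. A virtual environment keeps it isolated from system packages (the directory name is just a convention):

```shell
# Create and activate an isolated environment for the bridge
python3 -m venv venv
source venv/bin/activate

# Install the only third-party dependency the script uses
pip install requests
```

If you use a venv, remember later that the systemd service must invoke venv/bin/python rather than the system python3.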
Now, open telegram_bridge.py with your favorite editor (nano telegram_bridge.py) and paste the following Python code:
import os
import subprocess
import time

import requests

# --- Configuration ---
TELEGRAM_BOT_TOKEN = "YOUR_TELEGRAM_BOT_TOKEN"  # Replace with your bot token
ALLOWED_USER_ID = YOUR_TELEGRAM_USER_ID  # Replace with your numeric user ID
OPENCLAW_CLI_PATH = "/usr/local/bin/openclaw"  # Adjust if openclaw is not in your PATH
OPENCLAW_CONFIG_PATH = "~/.openclaw/config.json"  # Adjust if your config is elsewhere
POLLING_INTERVAL_SECONDS = 5  # How often to check for new messages
# --- End Configuration ---

telegram_api_base = f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}"
last_update_id = 0


def get_updates():
    """Long-poll Telegram for updates newer than the last one we processed."""
    global last_update_id
    try:
        params = {'offset': last_update_id + 1, 'timeout': 3}
        # The HTTP timeout must exceed the long-poll timeout above,
        # or requests will abort the call before Telegram responds.
        response = requests.get(f"{telegram_api_base}/getUpdates", params=params, timeout=10)
        response.raise_for_status()
        updates = response.json()['result']
        if updates:
            last_update_id = max(u['update_id'] for u in updates)
        return updates
    except requests.exceptions.RequestException as e:
        print(f"Error fetching Telegram updates: {e}")
        return []


def send_message(chat_id, text):
    try:
        # Note: LLM output isn't always valid Telegram Markdown; if you
        # see HTTP 400 errors here, drop the parse_mode parameter.
        params = {'chat_id': chat_id, 'text': text, 'parse_mode': 'Markdown'}
        response = requests.post(f"{telegram_api_base}/sendMessage", data=params, timeout=10)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f"Error sending Telegram message: {e}")


def run_openclaw(prompt):
    try:
        # Pass model and config explicitly for robustness.
        # claude-haiku-4-5 is often far cheaper than the default
        # Opus/Sonnet models and sufficient for interactive chat.
        # Adjust --model and --config as needed.
        cmd = [OPENCLAW_CLI_PATH, "chat", "--prompt", prompt,
               "--model", "claude-haiku-4-5",
               "--config", os.path.expanduser(OPENCLAW_CONFIG_PATH)]
        print(f"Running OpenClaw command: {' '.join(cmd)}")
        # A hard timeout prevents one stuck completion from hanging the bridge
        process = subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=300)
        return process.stdout.strip()
    except subprocess.TimeoutExpired:
        return "Error: OpenClaw timed out after 300 seconds."
    except subprocess.CalledProcessError as e:
        print(f"OpenClaw command failed: {e}")
        print(f"Stderr: {e.stderr}")
        return f"Error: OpenClaw failed to respond. Details: {e.stderr.strip()}"
    except FileNotFoundError:
        return f"Error: OpenClaw CLI not found at {OPENCLAW_CLI_PATH}. Please check the path."
    except Exception as e:
        return f"An unexpected error occurred while running OpenClaw: {e}"


def main():
    print("OpenClaw Telegram bridge started...")
    while True:
        updates = get_updates()
        for update in updates:
            if 'message' in update and 'text' in update['message']:
                message = update['message']
                chat_id = message['chat']['id']
                user_id = message['from']['id']
                text = message['text']
                if user_id != ALLOWED_USER_ID:
                    print(f"Received message from unauthorized user {user_id} in chat {chat_id}: {text}")
                    send_message(chat_id, "Sorry, I am a private bot and can only respond to my owner.")
                    continue
                print(f"Received message from {user_id} in chat {chat_id}: {text}")
                send_message(chat_id, "_Thinking..._")  # Provide immediate feedback
                response = run_openclaw(text)
                send_message(chat_id, response)
        time.sleep(POLLING_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
Crucial step: Replace "YOUR_TELEGRAM_BOT_TOKEN" with the token you got from BotFather and YOUR_TELEGRAM_USER_ID with your numeric User ID. Make sure OPENCLAW_CLI_PATH points to your actual OpenClaw executable (you can find it by running which openclaw). The default ~/.openclaw/config.json usually works, but verify its location.
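One caveat the script above doesn’t handle: Telegram rejects messages longer than 4096 characters, and long OpenClaw completions can exceed that. A small helper (hypothetical `chunk_text`, not part of the Bot API) sketches one way to split responses before sending:

```python
# Telegram's sendMessage rejects texts over 4096 characters,
# so long responses must be sent as several messages.
TELEGRAM_MAX_LEN = 4096

def chunk_text(text, limit=TELEGRAM_MAX_LEN):
    """Split text into pieces of at most `limit` characters,
    preferring to break at newlines so output stays readable."""
    chunks = []
    while len(text) > limit:
        # Break at the last newline inside the window if there is one
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks

# In the bridge, the single send_message(chat_id, response) call
# would become:
#     for part in chunk_text(response):
#         send_message(chat_id, part)
```

Breaking at newlines is a heuristic; it keeps most code blocks and paragraphs intact without any Markdown-aware parsing.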
A non-obvious insight here: while the OpenClaw documentation might suggest using the default model for various tasks, models like claude-haiku-4-5 (or even gpt-3.5-turbo if you’re using OpenAI) are often 10x cheaper and perfectly sufficient for 90% of interactive chat tasks. For a 24/7 assistant, cost efficiency is paramount. I’ve explicitly set --model claude-haiku-4-5 in the script for this reason.
This setup works best on a VPS with at least 2GB RAM. While OpenClaw itself is relatively light, the underlying LLM calls and Python process will consume some resources. A Raspberry Pi might struggle, especially if you’re running other services or requesting very long completions.
Making it Persistent with Systemd
To ensure your Telegram bridge runs continuously and restarts automatically after crashes or reboots, we’ll use systemd. Create a service file:
sudo nano /etc/systemd/system/openclaw-telegram.service
Paste the following content, adjusting the paths for User, WorkingDirectory, and ExecStart to match your user and the script’s location:
[Unit]
Description=OpenClaw Telegram Bridge
After=network.target
[Service]
# Use your login user, e.g. 'ubuntu' (systemd does not allow
# inline comments after a value, so keep comments on their own line)
User=your_username
WorkingDirectory=/home/your_
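Once the unit file is completed and saved, reload systemd and enable the service so the bridge starts now and on every boot:

```shell
# Pick up the new unit file
sudo systemctl daemon-reload

# Start the bridge immediately and enable it at boot
sudo systemctl enable --now openclaw-telegram.service

# Confirm it is running, and follow its logs
systemctl status openclaw-telegram.service
journalctl -u openclaw-telegram.service -f
```

These are standard systemctl/journalctl invocations; the unit name must match the file you created under /etc/systemd/system/.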
Frequently Asked Questions
What is OpenClaw and what does this integration achieve?
OpenClaw is a self-hosted AI assistant that runs on your own server and is driven through its command-line interface. Connecting it to Telegram lets you chat with it 24/7 from your phone or desktop, getting answers and task execution wherever you are.
Why should I connect OpenClaw to Telegram for AI assistance?
This integration provides continuous, round-the-clock AI support inside a chat app you already use. Because the bridge polls Telegram with outbound requests only, OpenClaw never has to accept inbound connections or expose a port to the internet.
What do I need to prepare before connecting OpenClaw to Telegram?
You'll need a running OpenClaw instance on a VPS (or another always-on machine), a Telegram account, a bot token from @BotFather, and your numeric Telegram user ID. The steps above walk through the full configuration.