Blog

  • How to Use OpenClaw to Manage and Monitor Multiple Websites at Once

    If you’re running OpenClaw to manage and monitor multiple websites, especially in a distributed setup across several servers, you’ve likely encountered the challenge of centralizing the operational data and ensuring consistent performance. The default setup for OpenClaw often assumes a single instance monitoring a few targets. When scaling up to dozens or even hundreds of websites, often with geographically diverse hosting, you need a more robust strategy for data collection, error reporting, and resource management. We’re going to dive into how to leverage OpenClaw’s internal mechanisms and some external tooling to achieve this, focusing on practical, actionable steps rather than theoretical architectures.

    Affiliate Disclosure: As an Amazon Associate, we earn from qualifying purchases. This means we may earn a small commission when you click our links and make a purchase on Amazon. This comes at no extra cost to you and helps support our site.

    Centralizing Monitoring Data with Custom Hooks

    OpenClaw provides powerful internal hooks that allow you to extend its functionality without modifying the core source code. For multi-site monitoring, the key is to capture the output of each monitoring run and send it to a centralized logging or metrics system. While OpenClaw writes logs to /var/log/openclaw/openclaw.log by default, parsing these across multiple instances becomes unwieldy. A better approach is to use the --post-run-script argument or configure a custom post-run hook in your configuration.

    Let’s say you have multiple OpenClaw instances, each monitoring a specific set of websites. For example, openclaw-us-east-1 monitors your US-based sites, and openclaw-eu-west-1 handles European ones. You want to send all critical alerts and performance metrics to a central InfluxDB instance for real-time dashboards and Grafana visualizations. Here’s how you can do it.

    First, create a simple Python script, let’s call it send_to_influxdb.py, that parses OpenClaw’s JSON output (which you can get via --json-output) and pushes it to InfluxDB. You’ll need the influxdb-client library installed (`pip install influxdb-client`).

    
    # /usr/local/bin/send_to_influxdb.py
    import sys
    import json
    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS
    
    # Configuration for your InfluxDB instance
    INFLUX_URL = "http://your-influxdb-ip:8086"
    INFLUX_TOKEN = "your-influxdb-token"
    INFLUX_ORG = "your-org"
    INFLUX_BUCKET = "openclaw_metrics"
    
    def send_data(data):
        client = InfluxDBClient(url=INFLUX_URL, token=INFLUX_TOKEN, org=INFLUX_ORG)
        write_api = client.write_api(write_options=SYNCHRONOUS)
    
        for check_result in data.get("checks", []):
            point = (
                Point("website_check")
                .tag("site_name", check_result.get("name", "unknown"))
                .tag("status", check_result.get("status", "unknown"))
                .field("response_time_ms", check_result.get("response_time_ms", 0))
                .field("http_status_code", check_result.get("http_status_code", 0))
                .field("success", 1 if check_result.get("success", False) else 0)
            )
            write_api.write(bucket=INFLUX_BUCKET, org=INFLUX_ORG, record=point)
        
        client.close()
    
    if __name__ == "__main__":
        if not sys.stdin.isatty():
            input_json = sys.stdin.read()
            try:
                data = json.loads(input_json)
                send_data(data)
            except json.JSONDecodeError as e:
                print(f"Error decoding JSON: {e}", file=sys.stderr)
            except Exception as e:
                print(f"Error sending to InfluxDB: {e}", file=sys.stderr)
    

    Make sure this script is executable: chmod +x /usr/local/bin/send_to_influxdb.py. Now, you can integrate this into your OpenClaw configuration. Edit your ~/.openclaw/config.json on each OpenClaw instance:

    
    {
      "api_key": "sk-...",
      "model": "claude-haiku-20240307",
      "checks": [
        {
          "name": "My Main Website",
          "url": "https://example.com",
          "interval": "5m"
        },
        {
          "name": "Another Service",
          "url": "https://service.example.net",
          "interval": "10m"
        }
      ],
      "post_run_hook": "/usr/local/bin/send_to_influxdb.py"
    }
    

    When OpenClaw completes a monitoring run, it will pipe its JSON output to this script, which then forwards the data to your central InfluxDB. This allows you to build a single Grafana dashboard that visualizes the status and performance of all your websites, regardless of which OpenClaw instance is monitoring them.
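Since everything downstream depends on the hook receiving well-formed JSON, it's worth smoke-testing the payload shape before pointing real instances at InfluxDB. The field names below match what send_to_influxdb.py reads; the schema your OpenClaw version actually emits may differ, so treat this as a sketch:

```python
import json

# Fields that send_to_influxdb.py reads from each entry in "checks".
# Adjust to the schema your OpenClaw version actually emits.
REQUIRED_FIELDS = {"name", "status", "response_time_ms", "http_status_code", "success"}

def validate_payload(raw):
    """Return a list of problems found in the JSON payload; empty means usable."""
    data = json.loads(raw)
    checks = data.get("checks")
    if not isinstance(checks, list):
        return ['top-level "checks" is missing or not a list']
    problems = []
    for i, check in enumerate(checks):
        missing = REQUIRED_FIELDS - set(check)
        if missing:
            problems.append(f"check {i} is missing: {sorted(missing)}")
    return problems

sample = ('{"checks": [{"name": "My Main Website", "status": "up", '
          '"response_time_ms": 123, "http_status_code": 200, "success": true}]}')
print(validate_payload(sample))  # []
```

Run a captured --json-output payload through a check like this once per instance before trusting the dashboards.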

    Choosing the Right AI Model for Cost-Effectiveness

    One of the less obvious lessons when scaling OpenClaw is how much your chosen AI model affects operational costs. The OpenClaw documentation often suggests the default model or a high-tier model like claude-3-opus-20240229 for its superior reasoning. That is excellent for complex analysis or infrequent tasks, but for routine health checks and anomaly detection across hundreds of websites it is usually overkill and prohibitively expensive.

    Through extensive testing, I’ve found that claude-3-haiku-20240307 is a phenomenal sweet spot. It is substantially cheaper than Opus, often by a factor of ten or more, and provides sufficient intelligence for 90% of typical website monitoring tasks. Its ability to parse error messages, detect subtle content changes, and summarize issues remains highly effective for standard HTTP status code checks, content validation, and even basic log analysis if you feed it the right data via custom check outputs. Unless you’re asking OpenClaw to write a detailed post-mortem report for every minor outage, Haiku will serve you well and keep your API costs manageable.
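To put rough numbers on this, here is a back-of-envelope estimate. The per-million-token prices below are illustrative assumptions, not quoted rates, so plug in your provider's current pricing before drawing conclusions:

```python
# Back-of-envelope API cost comparison. The per-million-token prices are
# illustrative assumptions -- check your provider's current pricing.
HAIKU_INPUT, HAIKU_OUTPUT = 0.25, 1.25    # $/1M tokens (assumed)
OPUS_INPUT, OPUS_OUTPUT = 15.00, 75.00    # $/1M tokens (assumed)

def monthly_cost(runs_per_day, in_tokens, out_tokens, price_in, price_out, days=30):
    tokens_in = runs_per_day * in_tokens * days
    tokens_out = runs_per_day * out_tokens * days
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# 200 sites checked every 5 minutes, ~800 input / ~150 output tokens per check:
runs = 200 * (24 * 60 // 5)
haiku = monthly_cost(runs, 800, 150, HAIKU_INPUT, HAIKU_OUTPUT)
opus = monthly_cost(runs, 800, 150, OPUS_INPUT, OPUS_OUTPUT)
print(f"Haiku ≈ ${haiku:,.0f}/mo, Opus ≈ ${opus:,.0f}/mo, ratio {opus / haiku:.0f}x")
# Haiku ≈ $670/mo, Opus ≈ $40,176/mo, ratio 60x
```

Even if the assumed prices are off by a factor of two, the gap between the tiers dominates the calculation.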

    To switch to Haiku, simply update your ~/.openclaw/config.json:

    
    {
      "api_key": "sk-...",
      "model": "claude-haiku-20240307",
      "checks": [
        ...
      ],
      "post_run_hook": "/usr/local/bin/send_to_influxdb.py"
    }
    

    This single change can drastically reduce your monthly API bill, especially when running multiple OpenClaw instances on frequent intervals.

    Resource Management and Limitations

    Running multiple OpenClaw instances, even with the cost-optimized Haiku model, still requires careful consideration of system resources. Each OpenClaw run involves a network request to the target website, potentially content parsing, and then an API call to the AI provider. While OpenClaw itself is relatively lightweight, the cumulative effect across many checks can strain smaller systems.

    This multi-instance, centralized monitoring approach works best on Virtual Private Servers (VPS) with at least 2GB of RAM and 1-2 vCPUs per OpenClaw instance if you’re running frequent checks (e.g., every 1-5 minutes for dozens of sites). On a smaller scale, like a Raspberry Pi, you will struggle. Raspberry Pis (especially older models) have limited RAM and slower I/O, which can lead to missed checks, delayed processing, or even OpenClaw processes being killed by the OOM (Out Of Memory) killer if your monitoring intervals are too aggressive or you’re checking too many complex websites.

    If you’re deploying on a VPS, ensure you monitor the CPU, memory, and network I/O of your OpenClaw instances. Tools like htop, sar, or integrating with your cloud provider’s monitoring (e.g., Hetzner Cloud Console’s built-in graphs) will provide valuable insights into potential bottlenecks. If you see sustained high CPU usage or memory nearing exhaustion, it’s a sign to either scale up your VPS, reduce the number of checks per instance, or increase the monitoring interval.
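That rule of thumb is easy to encode if you want your own tooling to flag it. The thresholds below are arbitrary starting points, not OpenClaw defaults:

```python
# Hypothetical helper encoding the rule of thumb above: sustained high CPU or
# near-exhausted memory means scale up, reduce checks, or lengthen the interval.
def scaling_advice(cpu_pct, mem_pct, cpu_limit=85.0, mem_limit=90.0):
    if cpu_pct >= cpu_limit and mem_pct >= mem_limit:
        return "scale up the VPS or split checks across more instances"
    if mem_pct >= mem_limit:
        return "reduce checks per instance (memory pressure risks the OOM killer)"
    if cpu_pct >= cpu_limit:
        return "increase the monitoring interval to lower CPU load"
    return "headroom looks fine"

print(scaling_advice(92.0, 60.0))  # increase the monitoring interval to lower CPU load
```

Feed it the averages you collect from htop or sar rather than instantaneous spikes.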

    Furthermore, ensure your network connectivity from the VPS to your target websites and to the AI API endpoint is stable. Intermittent network issues can surface as false positives in your monitoring data or as failed API calls, so rule out the network path before blaming the target site.

  • Security Best Practices for Running OpenClaw on a VPS

    If you’re running OpenClaw on a VPS and concerned about its security, especially when exposing its API or web interface to the internet, it’s critical to lock down your system. While OpenClaw itself is designed with security in mind, the environment it runs in often dictates the weakest link. Here, we’ll cover practical steps to harden your OpenClaw deployment on a typical Linux VPS, focusing on firewalls, user management, and secure API access.


    Isolating OpenClaw with a Dedicated User

    Running OpenClaw directly as root or your primary user is a common mistake that can have severe implications if a vulnerability is exploited. The principle of least privilege dictates that OpenClaw should run under its own dedicated user account, with minimal permissions. This confines any potential breach to a limited scope.

    First, create a new user and home directory for OpenClaw. Let’s call this user openclaw_user:

    sudo adduser openclaw_user --home /opt/openclaw --shell /bin/bash

    Next, switch to this user and install OpenClaw. This ensures all OpenClaw-related files and configurations are owned by openclaw_user:

    sudo su - openclaw_user
    git clone https://github.com/openclaw/openclaw.git app  # the home directory itself is not empty, so clone into a subdirectory
    cd app
    pip install -r requirements.txt
    exit

    If you’ve already installed OpenClaw, you’ll need to transfer ownership. Let’s assume OpenClaw is currently in /home/youruser/openclaw. You’d move it and then change ownership:

    sudo mv /home/youruser/openclaw /opt/
    sudo chown -R openclaw_user:openclaw_user /opt/openclaw

    Crucially, ensure that the openclaw_user does not have sudo privileges. Verify this by checking the /etc/sudoers file or the /etc/sudoers.d/ directory. If you see an entry for openclaw_user, remove it. This user should only be able to perform actions necessary for OpenClaw’s operation.

    Firewalling with UFW

    A firewall is your first line of defense. By default, most VPS providers offer a basic firewall, but configuring UFW (Uncomplicated Firewall) directly on the host gives you granular control. We want to allow only essential traffic: SSH for administration and the specific port OpenClaw listens on (default 8000 for its API, or 80/443 if you’re using a reverse proxy).

    First, enable UFW and configure basic rules:

    sudo ufw allow ssh # Or your custom SSH port, e.g., sudo ufw allow 2222/tcp
    sudo ufw allow 8000/tcp # Allow OpenClaw API access
    sudo ufw enable

    The sudo ufw enable command will prompt you to confirm enabling the firewall. Proceed carefully: misconfiguring UFW can lock you out of your server, so always make sure SSH is allowed *before* enabling the firewall.

    If you’re using a reverse proxy like Nginx or Caddy (highly recommended for production deployments, as detailed below), OpenClaw would typically only need to listen on 127.0.0.1:8000, and the reverse proxy would handle public-facing ports 80/443. In this scenario, you would NOT expose port 8000 publicly. Instead, your UFW rules would look like this:

    sudo ufw allow ssh
    sudo ufw allow http # For port 80
    sudo ufw allow https # For port 443
    sudo ufw enable

    This is a much more secure setup, as OpenClaw’s internal API is not directly exposed to the internet.

    Securing API Access and UI

    OpenClaw’s API and web UI often require authentication. Even if you’re not exposing the API directly to the internet, you should secure it. The OpenClaw configuration file, typically .openclaw/config.json in the user’s home directory (/opt/openclaw/.openclaw/config.json if following the dedicated user setup), is where you’d manage API keys and potentially UI authentication.

    A common mistake is to hardcode API keys directly into configuration files or shell scripts without any protection. OpenClaw allows you to specify API keys for different models. Ensure these keys are strong and, ideally, stored in a secrets management system, even if that is just a file with restricted permissions. On a VPS, consider loading them as environment variables from a file owned by openclaw_user with chmod 600 permissions.

    For example, in your .openclaw/config.json, you might reference environment variables:

    {
      "api_keys": {
        "openai": "${OPENAI_API_KEY}",
        "anthropic": "${ANTHROPIC_API_KEY}"
      },
      "web_ui": {
        "enabled": true,
        "require_auth": true,
        "username": "admin",
        "password_hash": "pbkdf2:sha256:..."
      }
    }
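The password_hash field expects a PBKDF2-style string. OpenClaw's exact expected format isn't documented here, but as an illustration, Python's standard library can generate and verify hashes of this general shape:

```python
import hashlib
import os

def pbkdf2_hash(password, iterations=600_000):
    # Derive a salted PBKDF2-SHA256 hash as a "pbkdf2:sha256:iter$salt$hash" string.
    # The string layout is illustrative; match whatever format OpenClaw expects.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2:sha256:{iterations}${salt.hex()}${dk.hex()}"

def pbkdf2_verify(password, stored):
    header, salt_hex, dk_hex = stored.split("$")
    iterations = int(header.rsplit(":", 1)[1])
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), iterations)
    return dk.hex() == dk_hex
```

Never store the plain password anywhere; generate the hash once and paste only the hash into the config.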

    Then, create an environment file, say /opt/openclaw/.env:

    OPENAI_API_KEY="sk-YOUR_OPENAI_KEY"
    ANTHROPIC_API_KEY="sk-ant-YOUR_ANTHROPIC_KEY"

    Make sure this .env file has restricted permissions:

    sudo chmod 600 /opt/openclaw/.env
    sudo chown openclaw_user:openclaw_user /opt/openclaw/.env

    You would then modify your systemd service file (see below) to load these environment variables. The non-obvious insight here is that while OpenClaw’s docs might show direct key entry, using environment variables loaded from a securely permissioned file provides a layer of separation and makes key rotation easier without modifying the core config.
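As an illustration, a systemd unit that loads the .env file might look like the following; the service name, paths, and ExecStart command are assumptions to adapt to your installation:

```ini
# /etc/systemd/system/openclaw.service (illustrative; adjust paths and command)
[Unit]
Description=OpenClaw agent
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw_user
Group=openclaw_user
WorkingDirectory=/opt/openclaw
EnvironmentFile=/opt/openclaw/.env
ExecStart=/usr/bin/env openclaw run
Restart=on-failure
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
```

After creating the unit, run sudo systemctl daemon-reload followed by sudo systemctl enable --now openclaw. Rotating a key then only requires editing the .env file and restarting the service.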

    Using a Reverse Proxy with SSL/TLS

    Directly exposing OpenClaw’s HTTP server is generally a bad idea for production. A reverse proxy like Nginx or Caddy offers significant benefits:

    • SSL/TLS Encryption: Encrypts all traffic between your users and OpenClaw, preventing eavesdropping. Let’s Encrypt makes this free and easy.
    • Request Filtering: Can block malicious requests before they even reach OpenClaw.
    • Load Balancing: If you ever scale to multiple OpenClaw instances.
    • Centralized Logging: Nginx/Caddy logs provide more detailed access information.

    Here’s a basic Nginx configuration snippet for proxying to OpenClaw, assuming it’s running on 127.0.0.1:8000:

    server {
        listen 80;
        server_name your.openclaw.domain;
    
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    
        # Add SSL configuration after obtaining a certificate (e.g., with Certbot)
    }

    After setting up this Nginx config (typically in /etc/nginx/sites-available/your.openclaw.domain and symlinked to /etc/nginx/sites-enabled/), restart Nginx and then run Certbot to automatically configure SSL:

    sudo systemctl restart nginx
    sudo certbot --nginx -d your.openclaw.domain

    This will automatically modify your Nginx configuration to use HTTPS. Ensure your UFW rules allow ports 80 and 443.

    This setup works well even on a VPS with limited resources (e.g., 1GB RAM), as Nginx is very lightweight. The main constraint is OpenClaw itself if it is resource-intensive due to the models you’re running. A Raspberry Pi, for instance, might struggle with the combined load of OpenClaw, a reverse proxy, and a full operating system if you’re running large language models locally.

    Regular Updates and Monitoring

    Security is an ongoing process, not a one-time setup. Keep your OS packages, OpenClaw itself, and your reverse proxy up to date, and review access logs regularly for anomalies.

  • OpenClaw Community Skills Review: Which ClawHub Skills Are Actually Useful?

    If you’re diving into the OpenClaw ecosystem and wondering which skills from ClawHub are worth your time and server resources, you’re not alone. The ClawHub marketplace is brimming with community-contributed skills, but not all are created equal. Some are incredibly niche, others are resource hogs, and a few are just plain broken or poorly maintained. This guide cuts through the noise to highlight the skills that consistently prove useful in real-world OpenClaw deployments, focusing on practical applications and performance considerations.

    Essential Utility Skills

    Let’s start with the workhorses – skills that provide fundamental capabilities often overlooked but crucial for robust OpenClaw operations. The clawhub/system-monitor skill is a prime example. While OpenClaw itself provides some basic logging, this skill integrates with common Linux utilities to give you a clearer picture of your system’s health. Specifically, it leverages htop and df -h, making their output accessible directly through OpenClaw’s API. To enable it, you’ll need to install htop if it’s not already present on your system:

    sudo apt update && sudo apt install htop -y

    Then, in your ~/.openclaw/config.json, add the skill definition under the skills array:

    {
      "api_key": "your_openclaw_api_key",
      "base_url": "http://localhost:8000",
      "skills": [
        {
          "name": "system-monitor",
          "path": "clawhub/system-monitor"
        }
      ],
      "model": "claude-haiku-4-5"
    }

    The non-obvious insight here is that while the default OpenClaw dashboard gives you CPU/RAM usage, system-monitor provides historical context and disk space utilization, which is invaluable for diagnosing issues like logs filling up your root partition or a runaway process consuming all I/O. For instance, if your OpenClaw instance suddenly becomes unresponsive, a quick check with system-monitor.get_system_status() via the API will immediately tell you if you’re out of disk space or if a background task is thrashing your CPU.
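Under the hood this is largely a wrapper around df -h, so it helps to know which fields come back. Here is a sketch of the kind of parsing such a skill performs (not the skill's actual code):

```python
# Parse `df -h` style output into dicts, the shape a system-monitor-style
# skill would expose. Sketch only; the real skill's field names may differ.
def parse_df(output):
    headers = ["filesystem", "size", "used", "avail", "use_pct", "mounted_on"]
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header line
        parts = line.split(None, 5)
        if len(parts) == 6:
            rows.append(dict(zip(headers, parts)))
    return rows

sample = """Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G   35G  5.0G  88% /
tmpfs           2.0G     0  2.0G   0% /dev/shm"""

# Flag any filesystem over 85% full, e.g. a root partition filling with logs:
full = [r for r in parse_df(sample) if int(r["use_pct"].rstrip("%")) > 85]
print(full)
```

A check like this is exactly how you catch the "logs filled the root partition" failure mode before OpenClaw goes unresponsive.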

    Another often-underestimated utility is clawhub/file-manager. While security-conscious users might be wary of giving an AI direct file system access, for local development or controlled environments, it’s a lifesaver. It allows OpenClaw to read, write, and delete files within a specified sandbox directory. This is particularly useful for tasks like processing data files, generating reports, or managing configuration for other applications OpenClaw might be orchestrating. Configure it like this:

    {
      "api_key": "your_openclaw_api_key",
      "base_url": "http://localhost:8000",
      "skills": [
        {
          "name": "file-manager",
          "path": "clawhub/file-manager",
          "config": {
            "base_directory": "/var/openclaw/data"
          }
        }
      ],
      "model": "claude-haiku-4-5"
    }

    Crucially, base_directory is not optional. If you omit it, the skill will fail to load with a permissions error, or worse, default to a less secure location. Always explicitly define a dedicated directory for OpenClaw’s file operations. This skill makes it trivial for OpenClaw to, for example, read a CSV, process its contents with a custom Python script (invoked via clawhub/python-executor), and then write the results to a new file, all without manual intervention.
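To see why the sandbox matters, here is the containment check a file-manager-style skill needs to perform; this is a sketch of the technique, not the skill's actual implementation:

```python
import os

# Without a realpath-based check like this, a crafted relative path such as
# "../../etc/passwd" escapes the sandbox. Sketch of the containment test.
def is_within_sandbox(base_directory, requested_path):
    base = os.path.realpath(base_directory)
    target = os.path.realpath(os.path.join(base, requested_path))
    return os.path.commonpath([base, target]) == base

print(is_within_sandbox("/var/openclaw/data", "reports/out.csv"))   # True
print(is_within_sandbox("/var/openclaw/data", "../../etc/passwd"))  # False
```

This is also why base_directory should be a dedicated directory rather than, say, /home: containment is only as useful as the narrowness of the base.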

    Practical Integration Skills

    OpenClaw truly shines when it can interact with external services. The clawhub/http-requester skill is absolutely fundamental for this. It allows OpenClaw to make arbitrary HTTP GET, POST, PUT, and DELETE requests. This means you can integrate with virtually any RESTful API – from sending notifications to a Slack channel to triggering webhooks on other services. While it might seem obvious, many users initially try to embed API calls directly into their Python executor scripts, which is less efficient and harder for OpenClaw to reason about. The dedicated skill provides a structured way for OpenClaw to understand and manage these interactions.

    {
      "api_key": "your_openclaw_api_key",
      "base_url": "http://localhost:8000",
      "skills": [
        {
          "name": "http-requester",
          "path": "clawhub/http-requester"
        }
      ],
      "model": "claude-haiku-4-5"
    }

    A common mistake is forgetting to handle API keys or authentication headers when using http-requester. While the skill itself doesn’t manage secrets, you can either hardcode them (not recommended for production) or have OpenClaw retrieve them from environment variables or a secure vault skill (if you implement one). For instance, to send a Slack message, your OpenClaw prompt might involve calling http-requester.post() with the appropriate Slack webhook URL and JSON payload. The non-obvious insight: OpenClaw’s internal reasoning engine often constructs more robust and error-resistant API calls when it has a dedicated, well-defined tool like http-requester rather than parsing an arbitrary Python script to find HTTP calls.
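The Slack-specific work is all in the payload; the skill just delivers it. Here is a sketch of building the webhook body (the http-requester.post call in the comment is illustrative, and SLACK_WEBHOOK_URL is a placeholder):

```python
import json

def slack_payload(text, channel=None):
    """Build the JSON body for a Slack incoming-webhook POST."""
    body = {"text": text}
    if channel:
        body["channel"] = channel
    return json.dumps(body)

# Prompting OpenClaw might then result in a call conceptually like:
#   http-requester.post(SLACK_WEBHOOK_URL, body=slack_payload("Site X is down"),
#                       headers={"Content-Type": "application/json"})
print(slack_payload("Site X is down"))
```

Keeping payload construction deterministic like this means the model only has to decide *what* to say, not how to serialize it.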

    For more specific integrations, clawhub/github-manager is excellent if your OpenClaw instance is involved in code management or CI/CD pipelines. It allows OpenClaw to create issues, pull requests, comment on PRs, and even fetch repository contents. This is particularly useful for automated bug reporting or for an AI assistant to help developers with common GitHub tasks. However, this skill requires careful configuration of GitHub tokens:

    {
      "api_key": "your_openclaw_api_key",
      "base_url": "http://localhost:8000",
      "skills": [
        {
          "name": "github-manager",
          "path": "clawhub/github-manager",
          "config": {
            "github_token": "ghp_YOUR_PERSONAL_ACCESS_TOKEN",
            "owner": "your_github_username",
            "repo": "your_repository_name"
          }
        }
      ],
      "model": "claude-haiku-4-5"
    }

    The token needs to have the correct scopes (e.g., repo for full access, or more granular scopes for specific actions). Failing to set these will lead to mysterious 403 Forbidden errors. This skill is overkill if you just need to clone a repo once; for that, clawhub/shell-executor (with git clone) is simpler. github-manager shines when OpenClaw needs to dynamically interact with GitHub’s API, like creating an issue for every failed test run reported by another system.

    Performance and Resource Considerations

    When selecting ClawHub skills, always be mindful of the resources they consume. Skills like clawhub/python-executor and clawhub/shell-executor are incredibly powerful but can also be resource intensive, especially if the commands or scripts they execute are long-running or memory-hungry. Running complex data processing via python-executor on a VPS with less than 2GB RAM can quickly lead to out-of-memory errors and OpenClaw crashes, especially if your OpenClaw model itself is also memory-intensive. For such tasks, consider offloading to a dedicated worker or using an external system that OpenClaw triggers via http-requester.

    Another point of consideration is the model you pair with these skills. While the documentation might suggest larger models for complex reasoning, for 90% of skill-based tasks, a smaller, faster model like claude-haiku-4-5 is often sufficient and significantly cheaper. OpenClaw’s tool-use capabilities are robust enough that even simpler models can effectively call and parse skill outputs, reserving larger models for more open-ended, creative tasks that don’t rely heavily on specific skill invocations.

    Finally, always test new skills in a staging environment. Some community skills might have unoptimized code or dependencies that conflict with your existing OpenClaw setup. Monitor your VPS’s CPU, RAM, and disk I/O when introducing new skills to catch performance regressions early.

    To start exploring these useful skills, your next concrete step is to add the clawhub/system-monitor skill to your ~/.openclaw/config.json file, along with the htop installation, and restart OpenClaw.

  • How to Test OpenClaw Skills Before Deploying to Production

    If you’re building an OpenClaw agent and want to thoroughly test its skills before it starts interacting with real users or production systems, you’ve likely hit the wall of “how do I simulate complex scenarios without breaking things or incurring huge API costs?” The standard openclaw test command is great for unit-level checks, but it falls short when you need to orchestrate multi-step interactions, test failure recovery, or evaluate performance under load. This guide will walk you through a practical, cost-effective approach to creating a robust testing environment for your OpenClaw agents, focusing on a local, containerized setup that mirrors production closely enough to be reliable.


    Setting Up a Local Testing Environment with Docker Compose

    The core of our testing strategy is to create an isolated, repeatable environment. Docker Compose is your best friend here: it lets you define and run multi-container Docker applications, which is perfect for simulating the external services your OpenClaw agent interacts with. We’re going to set up a local OpenClaw instance, a mock API server, and optionally a local database.

    First, create a directory for your testing environment, say openclaw-test-env/. Inside, create a docker-compose.yml file:

    
    version: '3.8'
    services:
      openclaw-agent:
        build:
          context: .
          dockerfile: Dockerfile.agent
        environment:
          OPENCLAW_CONFIG: /app/.openclaw/config.json
          # IMPORTANT: Use a local API key for testing, or mock the API key entirely
          OPENCLAW_API_KEY: "sk-local-test-key"
        volumes:
          - ./agent_data:/app/.openclaw
          - ./agent_code:/app/skills
        ports:
          - "8000:8000" # If your agent exposes an API
        depends_on:
          - mock-api
        command: openclaw run --port 8000 # Or whatever command starts your agent
    
      mock-api:
        build:
          context: .
          dockerfile: Dockerfile.mockapi
        ports:
          - "3000:3000" # Port for your mock API
        environment:
          MOCK_DATA_PATH: /app/mock_data.json
        volumes:
          - ./mock_data.json:/app/mock_data.json
    
      # Optional: A local PostgreSQL database
      postgres:
        image: postgres:15
        environment:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpassword
        ports:
          - "5432:5432"
        volumes:
          - pgdata:/var/lib/postgresql/data
    
    volumes:
      pgdata:
    

    You’ll need two Dockerfiles: Dockerfile.agent for your OpenClaw agent and Dockerfile.mockapi for a simple mock API server. For Dockerfile.agent, it might look like this:

    
    FROM python:3.10-slim-buster
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    RUN pip install openclaw
    COPY .openclaw/config.json .openclaw/config.json
    COPY skills/ ./skills/
    CMD ["openclaw", "run"]
    

    For Dockerfile.mockapi, you could use a simple Flask or Node.js server. Here’s a Flask example:

    
    FROM python:3.9-slim-buster
    WORKDIR /app
    RUN pip install Flask
    COPY mock_api.py .
    COPY mock_data.json .
    CMD ["python", "mock_api.py"]
    

    And mock_api.py:

    
    from flask import Flask, jsonify, request
    import json
    import os
    
    app = Flask(__name__)
    MOCK_DATA_PATH = os.environ.get('MOCK_DATA_PATH', 'mock_data.json')
    
    @app.route('/api/data', methods=['GET'])
    def get_data():
        with open(MOCK_DATA_PATH, 'r') as f:
            data = json.load(f)
        return jsonify(data)
    
    @app.route('/api/update', methods=['POST'])
    def update_data():
        new_data = request.json
        with open(MOCK_DATA_PATH, 'r+') as f:
            data = json.load(f)
            data.update(new_data)
            f.seek(0)  # Rewind to the beginning
            json.dump(data, f, indent=4)
            f.truncate() # Truncate any remaining old content
        return jsonify({"status": "success", "updated_data": new_data})
    
    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=3000)
    

    The mock_data.json will contain the initial state for your mock API. This setup allows your OpenClaw agent to make requests to http://mock-api:3000 within the Docker network, simulating real external service interactions.
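One subtlety in the /api/update handler above is the read-modify-write dance: forgetting seek(0) or truncate() leaves stale bytes in the file. The pattern, isolated so you can test it on its own:

```python
import json
import tempfile

# Read-modify-write on a JSON file, mirroring the /api/update handler:
# rewind before rewriting, truncate to drop leftover old content.
def update_json_file(path, new_data):
    with open(path, "r+") as f:
        data = json.load(f)
        data.update(new_data)
        f.seek(0)       # rewind to the beginning
        json.dump(data, f, indent=4)
        f.truncate()    # drop any remaining old bytes
    return data
```

If the new serialization is shorter than the old one and truncate() is skipped, the file ends with trailing garbage and the next json.load fails, which is a confusing bug to hit mid-test-run.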

    Crafting Realistic Test Scenarios

    The real power comes from how you define your tests. Forget about simple unit tests. We’re thinking integration and end-to-end. Your OpenClaw agent’s .openclaw/config.json needs to be adapted to point to your local mock services. For example:

    
    {
      "llm_provider": {
        "name": "anthropic",
        "model": "claude-3-haiku-20240307",
        "api_key_env": "OPENCLAW_API_KEY",
        "base_url": "http://localhost:8000/mock-llm-proxy"
      },
      "tools": [
        {
          "name": "fetch_data",
          "type": "api",
          "base_url": "http://mock-api:3000",
          "endpoints": {
            "get_data": "/api/data",
            "update_data": "/api/update"
          }
        },
        {
          "name": "database_tool",
          "type": "database",
          "driver": "postgresql",
          "host": "postgres",
          "port": 5432,
          "database": "testdb",
          "user": "testuser",
          "password_env": "POSTGRES_PASSWORD"
        }
      ]
    }
    

    The non-obvious insight here is to specifically configure your local OpenClaw agent to use a cheaper, faster LLM model for testing. While the production environment might demand claude-3-opus-20240229 for maximum reasoning, for 90% of your skill testing, claude-3-haiku-20240307 or even a local open-source LLM (via an LM Studio or Ollama proxy if you have sufficient local resources) is sufficient and drastically reduces costs and latency during development. OpenClaw is designed to be model-agnostic, so if your skill logic is sound, it should transfer between models.

    For your test scripts, you’ll be interacting with the OpenClaw agent’s API directly. If your agent is set up to expose an HTTP endpoint (e.g., via openclaw run --port 8000), you can use curl or a Python requests library to send prompts and receive responses. This allows you to simulate user interaction.

    Consider a scenario where your agent needs to fetch data, process it, and then update an external system. Your test script would:

    1. Start the Docker Compose environment: docker compose up -d
    2. (Optional) Initialize the mock API’s mock_data.json or the PostgreSQL database with a specific state.
    3. Send a prompt to your OpenClaw agent: requests.post('http://localhost:8000/chat', json={'prompt': 'Please fetch the latest data and summarize it, then update the status to "processed".'})
    4. Poll the OpenClaw agent’s status or wait for a response.
    5. Assert the final state of the mock API (e.g., by making a GET request to /api/data and verifying that the expected fields were updated) and, if applicable, of the database.
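For step 4, give your polling a timeout so a hung agent fails the test instead of blocking it forever. A small generic helper you might use in the test script (the requests call in the comment is a hypothetical example):

```python
import time

def poll_until(predicate, timeout=30.0, interval=0.5):
    """Call predicate() until it returns a truthy value or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# In a test script the predicate would wrap a real request, e.g. (hypothetical):
#   poll_until(lambda: requests.get("http://localhost:3000/api/data").json()
#                              .get("status") == "processed", timeout=60)
```

Raising on timeout, rather than returning None, ensures your test framework records a clear failure with a stack trace instead of a silently skipped assertion.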
  • Migrating From ChatGPT Plugins to OpenClaw: What You Gain and What Changes

    If you’re an agency or a developer who built custom integrations using ChatGPT Plugins and are now looking for a more stable, cost-effective, and extensible solution, migrating to OpenClaw is a logical next step. OpenAI deprecating plugins means your existing integrations are on borrowed time. OpenClaw offers a robust alternative, but the migration isn’t a simple drop-in replacement. You’ll gain significant control and flexibility, but you’ll also need to adapt your mindset from a black-box plugin model to a more open, agent-oriented architecture.


    Understanding the Core Architectural Shift

    The biggest change when moving from ChatGPT Plugins to OpenClaw is the shift from a “plugin-as-a-service” model to a “local agent orchestrator.” ChatGPT Plugins handled the entire lifecycle – discovery, execution, and response parsing – entirely within OpenAI’s infrastructure. You merely registered your API schema and let OpenAI manage the rest. OpenClaw, on the other hand, runs locally (or on your own server) and acts as an orchestration layer for your tools and models. This means you gain direct control over model selection, tool definition, and how your agent interacts with external services.

    For example, a ChatGPT Plugin might have had a manifest like this:

    {
      "schema_version": "v1",
      "name_for_model": "weather_plugin",
      "name_for_human": "Weather Plugin",
      "description_for_model": "Provides real-time weather information.",
      "auth": {
        "type": "none"
      },
      "api": {
        "type": "openapi",
        "url": "https://api.myweatherapp.com/.well-known/openapi.yaml"
      },
      "logo_url": "https://example.com/logo.png",
      "contact_email": "support@example.com",
      "legal_info_url": "https://example.com/legal"
    }
    

    In OpenClaw, your “tools” (the equivalent of plugin functionalities) are defined directly in Python and registered with your agent. You’re no longer pointing to an external OpenAPI spec that OpenAI consumes. Instead, you’re explicitly defining Python functions that OpenClaw’s agent can call. This gives you granular control over input validation, error handling, and even pre/post-processing logic right within your OpenClaw setup.

    Defining Tools in OpenClaw

    Let’s say your ChatGPT Plugin provided a `get_current_weather` function. In OpenClaw, you’d define this as a Python function and expose it to your agent. Here’s a basic example of how you’d define a tool:

    # tools.py
    import os
    
    import requests
    from openclaw.tools import tool
    
    @tool
    def get_current_weather(location: str) -> str:
        """
        Get the current weather in a given location.
        Args:
            location: The city and state, e.g. San Francisco, CA
        """
        try:
            # Read the key from the environment instead of hardcoding a secret
            api_key = os.environ.get("OPENWEATHER_API_KEY", "")
            url = f"https://api.openweathermap.org/data/2.5/weather?q={location}&appid={api_key}&units=metric"
            response = requests.get(url, timeout=10)  # never make unbounded outbound calls
            response.raise_for_status()
            data = response.json()
            temp = data['main']['temp']
            description = data['weather'][0]['description']
            return f"The current temperature in {location} is {temp}°C with {description}."
        except requests.exceptions.RequestException as e:
            return f"Error fetching weather for {location}: {e}"
        except KeyError:
            return f"Could not parse weather data for {location}. Is the location valid?"
    
    # In your agent configuration:
    # from tools import get_current_weather
    #
    # agent = OpenClawAgent(
    #     model="claude-haiku-4-5",
    #     tools=[get_current_weather],
    #     # ... other config
    # )
    

    Notice how `location: str` is explicitly typed. OpenClaw leverages type hints to automatically generate schema for the LLM, much like OpenAPI. The docstring provides the `description_for_model` that was previously in your plugin manifest, allowing the LLM to understand when to use the tool.
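You can see the mechanism in miniature. The sketch below is not OpenClaw’s actual implementation, just an illustration of how a framework can derive a tool schema from nothing but the signature and docstring:

```python
import inspect
import typing

def get_current_weather(location: str) -> str:
    """Get the current weather in a given location."""
    ...

def tool_schema(fn) -> dict:
    # Map Python type hints to rough JSON-schema type names.
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)  # only parameters go into the schema
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),  # becomes the model-facing description
        "parameters": {name: type_map.get(tp, "object") for name, tp in hints.items()},
    }

print(tool_schema(get_current_weather))
# {'name': 'get_current_weather', 'description': 'Get the current weather in a given location.', 'parameters': {'location': 'string'}}
```

This is why keeping type hints and docstrings accurate matters: they are the only tool documentation the LLM ever sees.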

    Cost Optimization and Model Flexibility

    One of the immediate benefits you’ll gain with OpenClaw is the ability to choose your LLM provider and specific model. With ChatGPT Plugins, you were locked into OpenAI’s models. OpenClaw allows you to integrate with various providers like Anthropic, OpenAI, or even local models. This is crucial for cost optimization. While the default inclination might be to use the latest, most powerful model, for many plugin-like tasks (e.g., retrieving data, triggering actions), a smaller, faster, and significantly cheaper model is often sufficient.

    For instance, while your initial setup might default to gpt-4o, for tasks that primarily involve calling a single tool and returning a formatted response, a model like Anthropic’s claude-haiku-4-5 is often 10x cheaper and just as effective. You configure this in your agent’s initialization:

    from openclaw.agents import OpenClawAgent
    
    # ... import your tools
    
    agent = OpenClawAgent(
        model="anthropic/claude-haiku-4-5", # Specify provider/model
        tools=[get_current_weather, ...],
        # ... other configurations
    )
    

    Experimentation is key here. Start with a cost-effective model and only upgrade if you observe a significant drop in performance or tool-calling reliability for your specific use cases. This granular control over the underlying LLM is something you simply couldn’t achieve with the deprecated ChatGPT Plugins.

    Handling State and Custom Logic

    ChatGPT Plugins were largely stateless from the perspective of the plugin itself; the state was managed by the chat interface. With OpenClaw, because you’re running the orchestrator, you have far greater control over state management and custom logic. You can integrate databases, caching layers, or complex business logic directly into your tool functions or the agent’s pre/post-processing hooks. This is particularly powerful for scenarios where plugins felt too constrained or required multiple round-trips to achieve a complex outcome.

    For example, if your old plugin needed to remember user preferences or past interactions, you would have relied on the chat history passed to the plugin. With OpenClaw, you can store this information directly within your application’s state or a database accessible by your tools, leading to more intelligent and context-aware interactions without burdening the LLM with excessive context window usage.
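As a sketch of that pattern (the table layout and class name are illustrative, not an OpenClaw API), a tool function can read and write preferences through a small SQLite-backed store:

```python
import json
import sqlite3

class PreferenceStore:
    """Hypothetical per-user key/value store a tool function could call directly,
    instead of replaying chat history through the LLM's context window."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS prefs (user_id TEXT, key TEXT, value TEXT, "
            "PRIMARY KEY (user_id, key))"
        )

    def set(self, user_id: str, key: str, value) -> None:
        # JSON-encode so lists/dicts round-trip, not just strings.
        self.conn.execute(
            "INSERT OR REPLACE INTO prefs VALUES (?, ?, ?)",
            (user_id, key, json.dumps(value)),
        )
        self.conn.commit()

    def get(self, user_id: str, key: str, default=None):
        row = self.conn.execute(
            "SELECT value FROM prefs WHERE user_id = ? AND key = ?", (user_id, key)
        ).fetchone()
        return json.loads(row[0]) if row else default

store = PreferenceStore()
store.set("user-42", "units", "metric")
print(store.get("user-42", "units"))  # metric
```

A `get_user_preferences` tool wrapping `store.get` then hands the LLM only the few values it actually needs for the current turn.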

    Limitations and Resource Considerations

    While OpenClaw offers significant advantages, it’s not a magic bullet. The main limitation is that you’re now responsible for running the orchestration layer. This means resource consumption. A basic OpenClaw agent running with a few tools might be light, but if you intend to run multiple agents concurrently or integrate with very large language models locally (which is not the common pattern for tool-calling), you need adequate resources. This setup will typically run smoothly on any VPS with at least 2GB RAM, like a Hetzner CX11. Attempting to run OpenClaw with multiple complex agents on something like a Raspberry Pi will likely result in slow responses and memory exhaustion, especially if you’re trying to integrate with local inference engines.

    Furthermore, while OpenClaw simplifies tool definition, you are now responsible for the full development lifecycle of those tools – from writing the Python code to handling dependencies and deployment. This is a trade-off: more control for more responsibility.

    The transition from ChatGPT Plugins to OpenClaw is a move towards a more robust, controlled, and cost-efficient agent-driven architecture. Embrace the shift from external manifest files to explicit Python tool definitions, and leverage the model flexibility for significant cost savings.

    To get started, define your first OpenClaw tool by creating a tools.py file with a function decorated with @tool and then instantiate your agent with it: agent = OpenClawAgent(model="anthropic/claude-haiku-4-5", tools=[your_tool_function]).

  • OpenClaw Sub-Agent Architecture: Running Multiple AI Tasks in Parallel

    If you’re using OpenClaw for more than just simple, single-shot tasks, you’ve probably hit a wall when trying to manage multiple, ongoing AI workflows. The default OpenClaw setup, while robust for individual agent runs, doesn’t inherently parallelize complex, multi-stage operations very well. This often leads to a bottleneck where a single long-running task hogs resources, preventing other, potentially more urgent, agents from executing. Imagine you’re running a market sentiment analysis agent that processes daily news, an automated customer support agent that handles incoming queries, and a content generation agent that drafts blog posts – all simultaneously. Without a proper sub-agent architecture, these tasks will often queue up, leading to delays and missed opportunities.

    Understanding the Problem with Single-Threaded Agents

    By default, when you initiate an OpenClaw agent, say with openclaw run my_main_agent.py, that script typically encompasses the entire logic for a specific task. If that task involves multiple steps – fetching data, processing it with one LLM, then refining the output with another – it’s all handled sequentially within that single agent’s execution context. This is perfectly fine for many use cases. However, when you introduce concurrency requirements, like needing to process ten independent data streams or respond to five simultaneous user queries, the single agent model quickly becomes a bottleneck. Your main agent script ends up being a monolithic entity trying to manage disparate concerns, often leading to complex state management and error handling within one file, making it harder to debug and scale.

    The core issue is resource contention and sequential execution. If your my_main_agent.py is busy fetching a large dataset or waiting for a slow LLM response, any other logical “task” within that same script has to wait. Even if you try to manage multiple internal “workflows” with async/await patterns in Python, you’re still constrained by the overall execution context of that single agent process. For true parallel processing, especially across different LLM calls that might have varying latencies or rate limits, you need a more distributed approach.

    Implementing a Sub-Agent Architecture

    The solution lies in breaking down your complex workflows into smaller, independent OpenClaw agents – let’s call them “sub-agents” – each responsible for a specific, atomic part of the overall process. The main agent then acts as an orchestrator, dispatching tasks to these sub-agents and collecting their results. This mimics a microservices architecture, but within the OpenClaw ecosystem.

    Here’s how you structure it:

    1. Orchestrator Agent: This is your primary OpenClaw agent. Its role is to define the overall workflow, determine which sub-agents need to run, and manage their execution.
    2. Sub-Agents: These are individual OpenClaw agent scripts (e.g., data_fetch_agent.py, sentiment_analysis_agent.py, summary_generation_agent.py). Each sub-agent should encapsulate a single, well-defined task. They are designed to be run independently and return a specific output.

    To make this work, you’ll leverage OpenClaw’s ability to run agents programmatically and, crucially, to pass data between them. The orchestrator won’t just execute sub-agents; it will often need to feed them inputs and consume their outputs.

    Orchestrator Configuration and Logic

    Let’s say your main agent is orchestrator_agent.py. Inside this agent, you’ll define the sequence of operations. Instead of directly calling LLMs for every step, you’ll invoke other OpenClaw agents. OpenClaw provides a programmatic API for this, but a more robust way for persistent, decoupled execution is to use file-based communication or a lightweight message queue if your setup permits. For simplicity and reliability on a VPS, we’ll stick to file-based communication.

    Your orchestrator_agent.py might look something like this:

    
    import json
    import subprocess
    import os
    import time
    
    class OrchestratorAgent:
        def __init__(self):
            self.output_dir = "/var/openclaw/outputs"
            os.makedirs(self.output_dir, exist_ok=True)
    
        def run_sub_agent(self, agent_name, input_data):
            input_file = os.path.join(self.output_dir, f"{agent_name}_input.json")
            output_file = os.path.join(self.output_dir, f"{agent_name}_output.json")
    
            with open(input_file, "w") as f:
                json.dump(input_data, f)
    
            # Run the sub-agent in the background or wait for it
            # For true parallelism, you'd daemonize this or use a process pool
            # For simplicity here, we'll run it synchronously and wait.
            # To run truly async and check for completion, you'd manage PIDs.
            print(f"Running sub-agent: {agent_name} with input: {input_file}")
            command = [
                "openclaw", "run",
                f"agents/{agent_name}.py",
                "--input-file", input_file,
                "--output-file", output_file
            ]
            
            try:
                # Use Popen to run asynchronously if you want to dispatch multiple
                # and then collect. For this example, we'll wait.
                process = subprocess.run(command, capture_output=True, text=True, check=True)
                print(f"Sub-agent {agent_name} finished. STDOUT: {process.stdout} STDERR: {process.stderr}")
    
                if os.path.exists(output_file):
                    with open(output_file, "r") as f:
                        return json.load(f)
                else:
                    print(f"Warning: {output_file} not found after {agent_name} execution.")
                    return {"error": "output file not found"}
            except subprocess.CalledProcessError as e:
                print(f"Error running sub-agent {agent_name}: {e}")
                print(f"Sub-agent STDOUT: {e.stdout}")
                print(f"Sub-agent STDERR: {e.stderr}")
                return {"error": str(e), "stdout": e.stdout, "stderr": e.stderr}
            finally:
                # Clean up input file if no longer needed
                if os.path.exists(input_file):
                    os.remove(input_file)
                # You might want to keep output files for debugging or auditing
    
        def run(self):
            # Example workflow
            data_to_process = {"text": "The stock market saw a significant rally today, boosting investor confidence."}
    
            # Step 1: Fetch data (might be a simple pass-through or actual fetching logic)
            fetched_data = self.run_sub_agent("data_fetcher", data_to_process)
            if fetched_data and not fetched_data.get("error"):
                print(f"Fetched data: {fetched_data.get('content', 'N/A')}")
            else:
                print("Data fetch failed, exiting.")
                return
    
            # Step 2: Analyze sentiment
            sentiment_result = self.run_sub_agent("sentiment_analyzer", {"text": fetched_data.get("content")})
            if sentiment_result and not sentiment_result.get("error"):
                print(f"Sentiment: {sentiment_result.get('sentiment', 'N/A')}")
            else:
                print("Sentiment analysis failed, exiting.")
                return
    
            # Step 3: Generate a summary based on sentiment
            summary_input = {
                "text": fetched_data.get("content"),
                "sentiment": sentiment_result.get("sentiment")
            }
            summary_result = self.run_sub_agent("summary_generator", summary_input)
            if summary_result and not summary_result.get("error"):
                print(f"Summary: {summary_result.get('summary', 'N/A')}")
            else:
                print("Summary generation failed, exiting.")
                return
    
            print("Workflow completed successfully!")
    
    # To run this orchestrator:
    if __name__ == "__main__":
        agent = OrchestratorAgent()
        agent.run()
    

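When the sub-tasks are independent rather than sequential, the synchronous run_sub_agent calls above can be fanned out with a thread pool. The work is I/O-bound (subprocesses waiting on LLM APIs), so threads are sufficient. This is a sketch; the stub worker stands in for OrchestratorAgent.run_sub_agent:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(run_sub_agent, jobs, max_workers=4):
    """Dispatch independent sub-agent jobs concurrently.

    jobs: list of (agent_name, input_data) tuples, one per independent task.
    Returns {agent_name: result} once every job has finished.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(run_sub_agent, name, payload)
                   for name, payload in jobs}
        # .result() blocks and re-raises any exception from the worker thread.
        return {name: fut.result() for name, fut in futures.items()}

# Stub standing in for OrchestratorAgent.run_sub_agent:
def fake_sub_agent(name, payload):
    return {"agent": name, "echo": payload}

results = run_parallel(fake_sub_agent, [("data_fetcher", {"q": 1}),
                                        ("sentiment_analyzer", {"q": 2})])
print(results["data_fetcher"])  # {'agent': 'data_fetcher', 'echo': {'q': 1}}
```

Note that this only helps for jobs with no data dependencies between them; the fetch → sentiment → summary chain in run() above must stay sequential, but ten such chains for ten data streams can run side by side.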
    Sub-Agent Structure

    Each sub-agent needs to be designed to read its input from a specified file (--input-file) and write its output to another specified file (--output-file). This standardized interface is critical for the orchestrator to interact with them effectively. Here’s an example of a sentiment_analyzer.py sub-agent:


    import json
    import argparse
    import os
    from openclaw import OpenClaw

    class SentimentAnalyzerAgent:

  • How to Use OpenClaw for Social Media Scheduling and Cross-Posting

    If you’re managing multiple social media accounts and struggling to keep up with consistent posting, OpenClaw can be a game-changer for automating your workflow. Forget about context switching between different platforms or paying for expensive, feature-bloated scheduling tools. We can leverage OpenClaw’s agent capabilities to draft posts, handle platform-specific formatting, and even manage cross-posting logic based on your content strategy. The core idea here is to define agents that understand the nuances of each platform – character limits, image requirements, hashtag conventions – and then give them a high-level content prompt.

    Affiliate Disclosure: As an Amazon Associate, we earn from qualifying purchases. This means we may earn a small commission when you click our links and make a purchase on Amazon. This comes at no extra cost to you and helps support our site.

    Setting Up Your OpenClaw Agents for Social Media

    The first step is to define your agents. You’ll want one for each platform you’re targeting, or at least one for content generation and separate ones for platform-specific adaptation. For this example, let’s assume we want to post to Twitter (X) and LinkedIn. We’ll use a main “Content Creator” agent and then “Twitter Adapter” and “LinkedIn Adapter” agents.

    Your .openclaw/config.json will need to be structured to define these agents. Pay close attention to the model and system_prompt for each. I’ve found that claude-haiku-4-5 is remarkably cost-effective and performs well for drafting and adapting social media content, especially when given a clear system prompt. Don’t fall into the trap of always using the most expensive model; for 90% of these tasks, Haiku is more than sufficient and significantly reduces API costs.

    
    {
      "agents": {
        "ContentCreator": {
          "model": "claude-haiku-4-5",
          "system_prompt": "You are an expert social media content creator. Your goal is to draft engaging and concise posts based on user prompts. Focus on clarity, value, and a strong call to action if applicable. Do not include platform-specific formatting yet.",
          "temperature": 0.7
        },
        "TwitterAdapter": {
          "model": "claude-haiku-4-5",
          "system_prompt": "You are a Twitter (X) specialist. Your task is to adapt a given piece of content for Twitter. Ensure it adheres to character limits (max 280, ideally less for engagement), incorporates relevant hashtags (2-3), and encourages interaction. Do NOT include any intro or outro text, just the tweet content.",
          "temperature": 0.5
        },
        "LinkedInAdapter": {
          "model": "claude-haiku-4-5",
          "system_prompt": "You are a LinkedIn specialist. Adapt the given content for a professional LinkedIn post. Focus on business value, professional tone, and add relevant industry hashtags (3-5). LinkedIn posts can be longer but should still be engaging. Do NOT include any intro or outro text, just the LinkedIn content.",
          "temperature": 0.5
        }
      },
      "default_agent": "ContentCreator",
      "api_keys": {
        "anthropic": "sk-your-anthropic-key"
      }
    }
    

    Note the explicit instructions in the system_prompt for the adapter agents: “Do NOT include any intro or outro text, just the tweet content.” This is crucial. Without it, you’ll often get responses like “Here is your tweet:” or “I have adapted the content for LinkedIn:”. These are unnecessary and waste tokens, and you’ll end up having to manually clean them up.

    Automating the Content Flow with OpenClaw Workflows

    Once your agents are defined, you can create a workflow that chains them together. This is where OpenClaw truly shines for automation. We’ll create a .openclaw/workflows/social_post.yaml file.

    
    workflow:
      name: Social Media Content Generator
      description: Generates content for Twitter and LinkedIn from a single prompt.
    
      steps:
        - name: Generate Core Content
          agent: ContentCreator
          input: "{{ prompt }}"
          output_to: core_content
    
        - name: Adapt for Twitter
          agent: TwitterAdapter
          input: "{{ core_content }}"
          output_to: twitter_post
    
        - name: Adapt for LinkedIn
      agent: LinkedInAdapter
          input: "{{ core_content }}"
          output_to: linkedin_post
    
      output:
        twitter: "{{ twitter_post }}"
        linkedin: "{{ linkedin_post }}"
    

    To run this workflow, you’d use the OpenClaw CLI:

    
    openclaw run social_post --prompt "Write a post about the benefits of using OpenClaw for developer productivity, focusing on automation and cost savings for small teams."
    

    The output will then contain structured JSON with both the Twitter and LinkedIn versions of your post. You can then pipe this output to further scripts that interface with the actual social media APIs (e.g., Tweepy for Twitter, LinkedIn’s API for LinkedIn), or even just copy-paste them manually into your scheduling tool of choice.

    Handling Images and Advanced Scheduling

    OpenClaw currently excels at text generation and transformation. For images, you’d integrate a separate step. You could, for example, have another agent that suggests image prompts or even integrates with a DALL-E or Midjourney API if you’ve set up a custom tool for OpenClaw. A practical approach is to generate the text first, and then have a human or a separate script select an appropriate image from a pre-curated library based on the text’s keywords.

    For actual scheduling, you’ll need to use external tools. OpenClaw generates the content; it doesn’t store state for future posting times. You could integrate the output of your OpenClaw workflow into a simple Python script that stores the posts in a database with scheduled times, and then uses a cron job to trigger the actual API calls at the right moment. This gives you maximum flexibility without OpenClaw needing to manage complex scheduling logic directly.
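A minimal version of that database-plus-cron pattern might look like this (table layout and function names are illustrative; the cron job simply runs a script that calls due_posts and hands each row to the platform API client):

```python
import sqlite3
import time

def init_db(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, "
        "platform TEXT, body TEXT, publish_at REAL, posted INTEGER DEFAULT 0)"
    )

def enqueue(conn, platform, body, publish_at):
    # Called once per platform with the workflow's JSON output.
    conn.execute("INSERT INTO queue (platform, body, publish_at) VALUES (?, ?, ?)",
                 (platform, body, publish_at))
    conn.commit()

def due_posts(conn, now=None):
    # Fetch everything due, then mark it posted so the next cron run skips it.
    now = now if now is not None else time.time()
    rows = conn.execute(
        "SELECT id, platform, body FROM queue WHERE posted = 0 AND publish_at <= ?",
        (now,),
    ).fetchall()
    conn.executemany("UPDATE queue SET posted = 1 WHERE id = ?",
                     [(r[0],) for r in rows])
    conn.commit()
    return rows

conn = sqlite3.connect(":memory:")  # use a file path in production
init_db(conn)
enqueue(conn, "twitter", "Post body here", time.time() - 1)  # already due
for _id, platform, body in due_posts(conn):
    print(platform, body)  # hand off to Tweepy / the LinkedIn API client here
```

Marking rows posted before the API call is deliberately simple; if a post fails you would flip the flag back (or add a `failed` state) rather than risk double-posting on the next cron tick.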

    It’s important to be honest about limitations. This setup, while powerful, doesn’t handle image uploads or direct API posting natively within OpenClaw. You’ll need additional scripts for those. Also, this approach works best if you have a decent amount of RAM (at least 2GB is recommended for smooth operation, especially if you’re running multiple agents or long prompts). Trying to run complex OpenClaw workflows on a Raspberry Pi with 1GB RAM might lead to slow responses or even crashes if other processes are memory-hungry.

    The non-obvious insight here is that separating content creation from platform adaptation, and then using a cost-effective model like Haiku for both, yields excellent results at a fraction of the cost of larger models. Many users default to Opus or GPT-4 for everything, but for routine tasks like social media, the smaller models, when properly prompted, are incredibly efficient.

    To get started, update your .openclaw/config.json with the agent definitions provided and then create the .openclaw/workflows/social_post.yaml file.

  • The Best OpenClaw AGENTS.md Setup I’ve Found After Testing 5 Versions

    If you’ve been experimenting with OpenClaw and feeling like your agents are either too verbose, not following instructions, or just plain expensive, you’re not alone. I’ve spent the last month iterating on my AGENTS.md file, trying to find the sweet spot between performance, cost, and output quality. The official examples are a great starting point, but they often lead to agents that overthink or consume too many tokens. This isn’t just about prompt engineering; it’s about structuring the AGENTS.md file itself to leverage OpenClaw’s capabilities efficiently. My setup, which I’ll detail below, focuses on a multi-agent approach where each agent has a very specific, limited role, and communication is explicit.

    Affiliate Disclosure: As an Amazon Associate, we earn from qualifying purchases. This means we may earn a small commission when you click our links and make a purchase on Amazon. This comes at no extra cost to you and helps support our site.

    The Problem with Monolithic Agents

    My initial approach, like many I’ve seen, was to create a single “Super Agent” designed to handle an entire complex task from start to finish. For example, an agent named ContentCreator would be responsible for generating an article idea, outlining it, writing sections, and then refining the whole thing. While this sounds intuitive, it quickly became problematic. The context window would balloon, leading to higher token usage and increased latency. More critically, the agent would often “forget” earlier instructions or get sidetracked, requiring extensive prompt tuning and frequent human intervention. Debugging became a nightmare because it was hard to pinpoint exactly where the agent went off track. The cost also escalated rapidly, especially when using larger models like claude-3-opus-20240229.

    Introducing the Specialized Multi-Agent Framework

    The breakthrough for me came when I realized OpenClaw excels when agents are highly specialized and tasks are broken down into granular steps. Instead of one large agent, I now use a chain of smaller, focused agents. Each agent has a single responsibility, and they pass their output to the next agent in the chain. This mirrors a traditional software pipeline and makes debugging significantly easier. If the final output is bad, I can examine the output of each preceding agent to find the bottleneck. This also allows for more flexible model selection; I can use cheaper, faster models for simpler tasks and reserve more powerful (and expensive) models for steps requiring deeper reasoning.

    Here’s a snippet of my refined AGENTS.md structure:

    # AGENTS.md
    
    ## Agent: TaskDeconstructor
    
    Model: claude-3-haiku-20240307
    Temperature: 0.2
    System Prompt: You are an expert task breakdown specialist. Given a user's high-level request, you will break it down into a list of specific, actionable sub-tasks. Each sub-task should be clear enough for another specialized agent to execute independently. Focus on logical sequential steps.
    Instructions:
    - Output a numbered list of sub-tasks.
    - Do not add any conversational filler.
    
    ## Agent: ResearchAssistant
    
    Model: claude-3-sonnet-20240229
    Temperature: 0.3
    System Prompt: You are a diligent research assistant. Your goal is to gather relevant information for a given sub-task. You will use the available `search` tool extensively. Synthesize findings concisely.
    Tools:
    - search
    Instructions:
    - Given a sub-task, use the `search` tool to find 2-3 credible sources.
    - Summarize the key findings relevant to the sub-task.
    - Cite your sources clearly.
    
    ## Agent: ContentGenerator
    
    Model: claude-3-sonnet-20240229
    Temperature: 0.7
    System Prompt: You are a creative content writer. Your task is to generate high-quality, engaging content based on the provided research and a specific sub-task. Focus on clarity, accuracy, and tone.
    Instructions:
    - Based on the provided research and sub-task, write the content.
    - Ensure the content flows logically.
    - Maintain a consistent tone (e.g., informative, persuasive, casual).
    
    ## Agent: Editor
    
    Model: claude-3-haiku-20240307
    Temperature: 0.1
    System Prompt: You are a meticulous editor. Your job is to refine, proofread, and improve the clarity, grammar, and style of the provided content. Ensure it meets the specified requirements.
    Instructions:
    - Review the content for grammar, spelling, punctuation, and syntax errors.
    - Improve sentence structure and word choice.
    - Ensure the content is concise and easy to read.
    - Check for consistency in tone and style.
    
    The Non-Obvious Insight: Model Selection for Each Step

    This is where the real cost savings and performance gains come in. The documentation often suggests picking a single model for your agent. However, with a specialized multi-agent setup, you can be much more strategic. For instance, my TaskDeconstructor uses claude-3-haiku-20240307. Haiku is incredibly fast and cheap, and for simply breaking down a task into a list, it performs perfectly. There’s no need for Opus here. Similarly, the Editor agent, focused on refinement and grammar, also benefits from Haiku’s speed and precision without needing a large context window or complex reasoning. Its primary job is pattern matching and correction, which Haiku handles very well.

    For agents like ResearchAssistant and ContentGenerator, I opt for claude-3-sonnet-20240229. Sonnet strikes an excellent balance between cost and capability. The research phase often requires synthesizing information from multiple sources, and content generation needs a certain level of creativity and coherence. Opus would certainly do a stellar job, but for 90% of my use cases, Sonnet is more than sufficient and significantly cheaper per token. The key is to match the model’s capabilities to the agent’s specific function, not the overall task complexity.

    Managing State and Communication

    One of the biggest challenges with chained agents is ensuring smooth communication and state management. OpenClaw handles this implicitly when you chain agents in your workflow (e.g., `openclaw run TaskDeconstructor -> ResearchAssistant -> ContentGenerator -> Editor`). Each agent receives the output of the previous one as its input. However, it’s crucial to instruct each agent clearly on what to expect as input and what format its output should take for the next agent. For example, TaskDeconstructor outputs a numbered list, which ResearchAssistant is then prompted to act upon, typically iteratively.
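One way to enforce that contract on the consumer side is a tiny parser between steps — here a hypothetical helper that turns the TaskDeconstructor's numbered list into individual sub-task strings, one per ResearchAssistant invocation:

```python
import re

def parse_numbered_list(text: str) -> list[str]:
    """Extract items from a bare numbered list ('1. foo' or '2) bar'),
    skipping any line that is not a list item."""
    tasks = []
    for line in text.splitlines():
        m = re.match(r"\s*\d+[.)]\s+(.*\S)", line)
        if m:
            tasks.append(m.group(1))
    return tasks

raw = """1. Research current market trends.
2. Draft the introduction.
3. Write the body sections."""
print(parse_numbered_list(raw))
# ['Research current market trends.', 'Draft the introduction.', 'Write the body sections.']
```

Because the parser silently drops non-matching lines, a stray "Here are your sub-tasks:" preamble degrades gracefully instead of corrupting downstream inputs — another reason the "no conversational filler" instruction is worth keeping anyway.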

    I also use a simple convention: each agent’s output should be as “clean” as possible – no conversational filler, just the raw, processed data or content. This minimizes token usage for subsequent agents and prevents unnecessary context from accumulating. The Instructions section in each agent’s definition is paramount for this.

    Limitations and When This Falls Short

    This multi-agent strategy is highly effective for tasks that can be broken down into discrete, sequential steps. However, it’s not a silver bullet. If your task requires extremely deep, multi-faceted reasoning that spans across several steps and requires the agent to maintain a very complex internal state or perform highly iterative, non-linear problem-solving, a single, more powerful agent (like Opus) with a larger context window might still be necessary. For instance, deeply technical coding tasks or complex scientific simulations might push the limits of this chained approach, as the explicit context passing might become cumbersome or lose nuance.

    Furthermore, this setup is most efficient on a VPS with at least 2GB of RAM, especially if you’re running multiple OpenClaw processes concurrently or dealing with very large outputs. While OpenClaw itself is lightweight, the underlying LLM calls and the processing of potentially large JSON outputs can consume memory. A Raspberry Pi, while capable of running OpenClaw for simpler tasks, might struggle with a complex multi-agent pipeline processing extensive research findings.

    This setup also assumes a stable internet connection for consistent API calls. If your connection is flaky, the chained calls might fail more frequently, requiring more robust error handling in your OpenClaw scripts.

    To implement this setup, define your specialized agents in your AGENTS.md file, matching models to their specific roles as described, then chain them together in your workflow. The most direct next step is to open your existing AGENTS.md file, refactor your monolithic agents into specialized, single-responsibility units, and update your workflow script to chain them, for example: openclaw run TaskDeconstructor -> ResearchAssistant -> ContentGenerator -> Editor --request "Write a blog post about the benefits of specialized AI agents."

  • OpenClaw for Developers: API Access, Webhooks, and Scripting Your Own Tools

    If you’re using OpenClaw to automate parts of your development workflow and find yourself needing more than just the UI, you’ve likely hit the wall of manual intervention. OpenClaw is powerful out of the box, but its true potential for developers lies in programmatic access. This note will walk you through leveraging OpenClaw’s API, setting up webhooks for real-time notifications, and scripting your own tools to integrate OpenClaw into your existing CI/CD pipelines, monitoring systems, or custom dashboards. We’re going to make OpenClaw a silent partner in your backend, not just a browser tab.

    OpenClaw API Access: The Foundation

    The first step to scripting OpenClaw is understanding how to interact with its API. Unlike some tools that hide their API behind complex SDKs, OpenClaw provides a straightforward RESTful interface. To get started, you’ll need an API key. Navigate to your OpenClaw instance, typically at http://localhost:8080, log in, and then go to Settings > API Keys. Generate a new key and make sure to copy it immediately, as it won’t be shown again.

    Most API interactions will require this key in an Authorization header as a Bearer token. Let’s say you want to list all active claws (our term for an automated task definition). You’d use a simple GET request. Here’s a typical curl command:

    
    curl -X GET \
      http://localhost:8080/api/v1/claws \
      -H 'Authorization: Bearer YOUR_API_KEY_HERE' \
      -H 'Content-Type: application/json'
    

    The response will be a JSON array of claw objects, each containing its ID, name, status, and configuration details. This is your entry point for programmatically querying the state of your OpenClaw instance. You can then use these IDs to interact with specific claws, for example, to trigger them:
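    The same listing call is easy to script. The sketch below uses only the Python standard library so it runs anywhere; it assumes the `/api/v1/claws` endpoint and bearer-token auth shown in the curl command above, and the `OPENCLAW_API_URL` environment variable is a convention of this example, not OpenClaw itself:

```python
import json
import os
import urllib.request

API_URL = os.getenv("OPENCLAW_API_URL", "http://localhost:8080/api/v1")

def summarize_claws(claws):
    """Reduce the raw JSON array to (id, name, status) tuples for quick scanning."""
    return [(c["id"], c["name"], c["status"]) for c in claws]

def list_claws(api_key):
    """GET /claws with the bearer token and return summarized results."""
    req = urllib.request.Request(
        f"{API_URL}/claws",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return summarize_claws(json.load(resp))
```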

    
    curl -X POST \
      http://localhost:8080/api/v1/claws/CLAW_ID_HERE/trigger \
      -H 'Authorization: Bearer YOUR_API_KEY_HERE' \
      -H 'Content-Type: application/json' \
      -d '{}'
    

    One non-obvious insight here: while the documentation might suggest creating a new claw for every single ephemeral task, it’s often more efficient for common, repetitive tasks to have a single “template” claw and simply pass different parameters via the API’s trigger endpoint using the -d '{"parameters": {"key": "value"}}' flag. This reduces the overhead of creating and deleting claws dynamically and keeps your OpenClaw instance cleaner. This approach works best for claws that perform similar actions but operate on different data points.
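    The template-claw pattern can be sketched in a few lines. This example assumes the trigger endpoint accepts the `{"parameters": {...}}` body shown above; the helper names are illustrative:

```python
import json
import os
import urllib.request

API_URL = os.getenv("OPENCLAW_API_URL", "http://localhost:8080/api/v1")

def build_trigger_payload(parameters=None):
    """Build the JSON body matching the -d '{"parameters": {...}}' shape."""
    return json.dumps({"parameters": parameters or {}}).encode()

def trigger_claw(claw_id, api_key, parameters=None):
    """POST run-specific parameters to a single 'template' claw."""
    req = urllib.request.Request(
        f"{API_URL}/claws/{claw_id}/trigger",
        data=build_trigger_payload(parameters),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# One template claw, many targets:
# trigger_claw("clw_template", key, {"target_url": "https://example.com"})
```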

    Webhooks for Real-time Notifications

    Polling the API constantly for status updates is inefficient and can quickly hit rate limits on busy OpenClaw instances. This is where webhooks shine. OpenClaw allows you to configure webhooks to send HTTP POST requests to a specified URL whenever certain events occur, such as a claw completing, failing, or a new task being initiated.

    To set up a webhook, you’ll configure it directly within your OpenClaw instance. Navigate to Settings > Webhooks. Click “Add New Webhook.” You’ll need to provide:

    • URL: The endpoint your application exposes to receive the webhook payload.
    • Secret (Optional): A shared secret to sign the webhook payload, allowing your application to verify the sender’s authenticity. This is crucial for security and should always be used in production environments.
    • Events: A selection of events that will trigger the webhook (e.g., claw.completed, claw.failed, task.started).

    A common use case is integrating OpenClaw task failures with your existing monitoring and alerting stack, like PagerDuty or a custom Slack integration. Instead of writing a script that periodically checks the status of all claws, your webhook endpoint will receive an immediate notification with all the relevant details (claw ID, task ID, error message, etc.) when a claw.failed event occurs. Your endpoint can then parse this JSON payload and trigger an alert. Remember that your webhook endpoint needs to return a 2xx HTTP status code quickly to acknowledge receipt; any heavy processing should be offloaded to an asynchronous queue.

    The webhook payload typically looks something like this for a claw.completed event:

    
    {
      "event": "claw.completed",
      "timestamp": "2023-10-27T10:30:00Z",
      "clawId": "clw_abcdef12345",
      "clawName": "DailyReportGenerator",
      "taskId": "tsk_ghijkl67890",
      "status": "success",
      "result": {
        "outputFile": "/reports/daily/2023-10-27.pdf",
        "recordsProcessed": 1234
      }
    }
    

    The non-obvious insight here is that when developing your webhook endpoint, always log the raw incoming payload first. The exact structure and content of the result field can vary significantly depending on how your claw is configured and what it returns. Don’t assume a fixed schema for this field; design your parser to be flexible or at least gracefully handle missing keys.
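    Putting those pieces together, here is a minimal receiver sketch using only the standard library. It assumes the optional shared secret is used as an HMAC-SHA256 key over the raw request body and that the signature arrives in an `X-OpenClaw-Signature` header; both the signing scheme and the header name are assumptions, so verify them against what your instance actually sends:

```python
import hashlib
import hmac
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBHOOK_SECRET = b"change-me"  # the shared secret configured under Settings > Webhooks

def verify_signature(raw_body, signature_hex, secret=WEBHOOK_SECRET):
    """Compare an HMAC-SHA256 hex digest of the raw body in constant time."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        raw = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print(raw.decode(errors="replace"))  # log the raw payload before anything else
        # Header name is an assumption; check what your instance actually sends.
        sig = self.headers.get("X-OpenClaw-Signature", "")
        if not verify_signature(raw, sig):
            self.send_response(401)
            self.end_headers()
            return
        # Acknowledge fast with a 2xx; push the event onto a queue for heavy work.
        self.send_response(200)
        self.end_headers()
        event = json.loads(raw)
        if event.get("event") == "claw.failed":
            pass  # e.g. enqueue an alert for PagerDuty/Slack here

# To serve: HTTPServer(("0.0.0.0", 9000), WebhookHandler).serve_forever()
```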

    Scripting Your Own Tools

    With API access and webhooks, you have all the building blocks to script powerful custom tools. Python is an excellent choice for this, given its rich ecosystem for HTTP requests and JSON parsing. Let’s outline a simple Python script that monitors a specific claw and re-triggers it if it fails more than N times within an hour.


    import requests
    import os
    import time
    from collections import deque

    OPENCLAW_API_URL = os.getenv("OPENCLAW_API_URL", "http://localhost:8080/api/v1")
    OPENCLAW_API_KEY = os.environ.get("OPENCLAW_API_KEY")

    if not OPENCLAW_API_KEY:
        raise ValueError("OPENCLAW_API_KEY environment variable not set.")

    HEADERS = {
        "Authorization": f"Bearer {OPENCLAW_API_KEY}",
        "Content-Type": "application/json"
    }

    def get_claw_status(claw_id):
        try:
            response = requests.get(f"{OPENCLAW_API_URL}/claws/{claw_id}", headers=HEADERS)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            print(f"Error fetching status for claw {claw_id}: {e}")
            return None

    def trigger_claw(claw_id):
        try:
            response = requests.post(f"{OPENCLAW_API_URL}/claws/{claw_id}/trigger", headers=HEADERS, json={})
            response.raise_for_status()
            print(f"Claw {claw_id} re-triggered successfully.")
            return response.json()
        except requests.exceptions.RequestException as e:
            print(f"Error triggering claw {claw_id}: {e}")
            return None

    def main(claw_id, max_failures=3, monitor_window_seconds=3600):
        failure_timestamps = deque()
        print(f"Monitoring claw {claw_id} for failures...")

        while True:
            status = get_claw_status(claw_id)
            if status and status.get("status") == "failed":
                current_time = time.time()
                failure_timestamps.append(current_time)

                # Remove failures outside the monitoring window
                while failure_timestamps and failure_timestamps[0] < current_time - monitor_window_seconds:
                    failure_timestamps.popleft()

                if len(failure_timestamps) >= max_failures:
                    print(f"Claw {claw_id} failed {len(failure_timestamps)} times within the last hour. Triggering re-run.")
                    trigger_claw(claw_id)
                    # Clear failures after a re-trigger to prevent infinite loops
                    failure_timestamps.clear()
                else:
                    print(f"Claw {claw_id} failed. {len(failure_timestamps)} failures in window.")

            time.sleep(60)  # Check every minute

    if __name__ == "__main__":
        target_claw_id = os.getenv("TARGET_CLAW_ID")
        if not target_claw_id:
            raise ValueError("TARGET_CLAW_ID environment variable not set.")
        main(target_claw_id)

  • How to Set Up OpenClaw Cron Jobs for Scheduled Automation

    If you’re running OpenClaw on a server and you want to automate tasks like daily report generation, data synchronization, or periodic content updates, you’ll quickly realize that manually triggering scripts is unsustainable. OpenClaw itself doesn’t have a built-in scheduler, but because it’s designed for command-line execution, integrating it with standard Linux cron jobs is straightforward and robust. This allows you to set up complex automation flows that run reliably in the background, freeing you from manual intervention.

    Affiliate Disclosure: As an Amazon Associate, we earn from qualifying purchases. This means we may earn a small commission when you click our links and make a purchase on Amazon. This comes at no extra cost to you and helps support our site.

    Understanding Cron Jobs for OpenClaw Automation

    Cron is a time-based job scheduler in Unix-like operating systems. It enables users to schedule commands or scripts to run automatically at a specified date and time. For OpenClaw, this means you can schedule any OpenClaw CLI command or a shell script that wraps multiple OpenClaw commands. The key is to ensure your cron environment is correctly configured to find OpenClaw and its dependencies, which is often where users encounter issues.

    Let’s say you have an OpenClaw script, generate_daily_summary.py, which uses OpenClaw to process some data and output a summary. Instead of running python3 generate_daily_summary.py manually every day, you can add it to your crontab. The crontab command allows you to edit your user’s cron table. To start, open your crontab for editing:

    crontab -e

    This will open a text editor (usually vi or nano) with your current crontab entries. Each line in the crontab represents a scheduled job and follows a specific format:

    * * * * * command_to_be_executed

    The five asterisks represent (in order): minute (0-59), hour (0-23), day of month (1-31), month (1-12), and day of week (0-7, where 0 or 7 is Sunday). For example, to run your summary script every day at 3:00 AM, you would add:

    0 3 * * * /usr/bin/python3 /path/to/your/openclaw_scripts/generate_daily_summary.py >> /var/log/openclaw_summary.log 2>&1

    Notice the full paths to python3 and your script. This is crucial because cron jobs often run with a minimal PATH environment variable, meaning commands that work interactively might fail in cron if not fully qualified. Also, redirecting stdout and stderr to a log file (>> /var/log/openclaw_summary.log 2>&1) is essential for debugging, as cron jobs run silently in the background.

    Handling Environment Variables and Virtual Environments

    One of the most common pitfalls when setting up OpenClaw cron jobs is the difference in environment variables between your interactive shell and the cron environment. If you’ve installed OpenClaw in a Python virtual environment, simply calling python3 might not activate the correct environment or find the necessary libraries.

    To address this, you have a couple of robust options:

    1. Source the virtual environment within the cron job: This is generally the most reliable method.
    2. Use the full path to the virtual environment’s Python interpreter: Simpler for single scripts.

    Let’s assume your OpenClaw virtual environment is located at /home/user/openclaw_env/. Your script generate_daily_summary.py might look something like this, using the OpenClaw CLI directly:

    # generate_daily_summary.py
    import subprocess
    
    def main():
        command = [
            "/home/user/openclaw_env/bin/openclaw",
            "chat",
            "create",
            "--model", "claude-haiku-4-5",
            "--prompt", "Generate a summary of today's server logs.",
            "--input-file", "/var/log/syslog",
            "--output-file", "/var/log/daily_summary.txt"
        ]
        
        result = subprocess.run(command, capture_output=True, text=True)
        
        if result.returncode == 0:
            print("Summary generated successfully.")
            print(result.stdout)
        else:
            print("Error generating summary:")
            print(result.stderr)
            # Exit non-zero so cron (and MAILTO alerts) can detect the failure
            raise SystemExit(result.returncode)
    
    if __name__ == "__main__":
        main()
    

    In your crontab, you could then specify the Python interpreter directly from your virtual environment:

    0 3 * * * /home/user/openclaw_env/bin/python3 /path/to/your/openclaw_scripts/generate_daily_summary.py >> /var/log/openclaw_summary.log 2>&1

    Alternatively, if your script is simpler and just calls openclaw directly, you might need to source your virtual environment. Create a small wrapper shell script, e.g., run_openclaw_summary.sh:

    #!/bin/bash
    source /home/user/openclaw_env/bin/activate
    /home/user/openclaw_env/bin/openclaw chat create --model claude-haiku-4-5 --prompt "Generate a summary of today's server logs." --input-file /var/log/syslog --output-file /var/log/daily_summary.txt >> /var/log/openclaw_summary.log 2>&1
    deactivate # Optional, but good practice if you don't need the env active later
    

    Make sure this script is executable: chmod +x /path/to/your/run_openclaw_summary.sh. Then, your crontab entry would be:

    0 3 * * * /path/to/your/run_openclaw_summary.sh

    This wrapper script handles the environment activation, making the cron job more robust.

    Non-Obvious Insight: API Key Management and Cost Monitoring

    When running OpenClaw with cron jobs, especially on a VPS, you’re interacting with external APIs. Your API keys are usually set as environment variables or in .openclaw/config.json. If using environment variables, ensure they are present in the cron job’s environment. You can add them directly to your crontab file at the top:

    MAILTO="your_email@example.com" # Get email alerts for failures
    ANTHROPIC_API_KEY="sk-..."
    OPENAI_API_KEY="sk-..."
    
    0 3 * * * /path/to/your/run_openclaw_summary.sh
    

    However, storing sensitive API keys directly in crontab is generally discouraged for security reasons. A better approach is to ensure the user running the cron job (usually your user) has these variables set in their ~/.profile or ~/.bashrc, and then explicitly source one of these files in your wrapper script before calling OpenClaw. For example, add source ~/.profile at the beginning of run_openclaw_summary.sh if your API keys are defined there.

    The non-obvious insight here is cost monitoring. Automated OpenClaw tasks can quickly rack up API usage. Always specify the cheapest model that meets your needs. While the OpenClaw documentation might suggest powerful models for general use, for 90% of summarization, classification, or data extraction tasks, claude-haiku-4-5 or gpt-3.5-turbo are dramatically cheaper and often sufficient. For example, Anthropic’s Haiku model is significantly more cost-effective than Opus for most routine tasks. Integrate OpenClaw’s output with simple log parsing or a custom script that calls your API provider’s usage endpoints to get daily cost estimates.
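    A back-of-the-envelope estimator makes the model choice concrete. The per-million-token prices below are placeholders, not current rates; substitute your provider's published pricing before relying on the numbers:

```python
# Placeholder per-million-token prices in USD -- illustrative only,
# substitute your provider's current published rates.
PRICES = {
    "claude-haiku-4-5": {"input": 1.00, "output": 5.00},
    "gpt-3.5-turbo": {"input": 0.50, "output": 1.50},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Rough USD cost for one run, given token counts from your logs."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A daily cron job doing 30 summaries of ~8k input / 500 output tokens:
daily = 30 * estimate_cost("claude-haiku-4-5", 8_000, 500)
print(f"Estimated daily spend: ${daily:.4f}")
```

    Running the same numbers against a frontier-tier model's pricing usually shows an order-of-magnitude difference, which is exactly why the cheapest sufficient model matters for scheduled jobs.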

    Limitations and Resource Considerations

    This approach works well on most Linux-based VPS systems with sufficient resources. A VPS with at least 1GB RAM is generally recommended for even light OpenClaw tasks, as Python scripts and LLM API calls can be memory-intensive, especially with larger input files or more complex models. If you’re running on something like a Raspberry Pi, be mindful of CPU and RAM. While it might work for very simple, infrequent tasks, heavy daily automation could easily overwhelm it, leading to missed cron jobs or system instability. Ensure your scripts are optimized and that you’re not trying to process enormous files or make hundreds of API calls simultaneously on underpowered hardware.

    Finally, remember that cron jobs run in a non-interactive shell. If your OpenClaw script requires any form