If you’re looking to extend OpenClaw’s capabilities beyond its built-in commands and the official skill marketplace, creating a custom skill is the way to go. This note will walk you through the process, from defining the skill’s structure to integrating it into your OpenClaw instance. We’ll focus on a practical example: a skill that queries a local weather API, something not directly supported by default.
Skill Directory Structure and Boilerplate
OpenClaw skills are essentially Python modules with a specific entry point and metadata. All custom skills should reside in your ~/.openclaw/skills/ directory. If this directory doesn’t exist, create it: mkdir -p ~/.openclaw/skills/. Each skill needs its own subdirectory within this path. Let’s create one for our weather skill: mkdir -p ~/.openclaw/skills/local_weather. Inside this directory, you’ll need at least two files: __init__.py and config.json.
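Putting those commands together, the scaffold amounts to the following (paths exactly as described above; the two files start empty and are filled in next):

```shell
mkdir -p "$HOME/.openclaw/skills/local_weather"
touch "$HOME/.openclaw/skills/local_weather/__init__.py"
touch "$HOME/.openclaw/skills/local_weather/config.json"
```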
The config.json file defines the skill’s metadata and how OpenClaw should present it. For our local_weather skill, it would look like this:
{
  "name": "Local Weather",
  "description": "Fetches local weather conditions from a specified API endpoint.",
  "version": "0.1.0",
  "author": "Your Name",
  "icon": "weather-icon.png",
  "commands": [
    {
      "name": "get_current_weather",
      "description": "Retrieves current weather conditions for a given location.",
      "args": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city or geographical area to get weather for."
          }
        },
        "required": ["location"]
      }
    }
  ]
}
The commands array is crucial here. Each object within it defines a function that OpenClaw’s AI can call. name is the Python function name, description helps the AI understand its purpose, and args defines the input parameters using JSON schema. This schema guides the AI on what arguments to provide. For our weather skill, we only need a location string.
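To make the schema's role concrete, here is a rough sketch of the kind of argument check such a schema enables. `check_args` is purely illustrative, not part of OpenClaw's API; a production validator would use a full JSON Schema library:

```python
# The "args" block from config.json, as a Python dict.
ARGS_SCHEMA = {
    "type": "object",
    "properties": {
        "location": {
            "type": "string",
            "description": "The city or geographical area to get weather for.",
        }
    },
    "required": ["location"],
}


def check_args(schema: dict, args: dict) -> list:
    """Return a list of problems; an empty list means the call is valid."""
    problems = []
    # Every key listed in "required" must be present.
    for key in schema.get("required", []):
        if key not in args:
            problems.append(f"missing required argument: {key}")
    # Each supplied value must match its declared type (strings only here).
    for key, value in args.items():
        expected = schema["properties"].get(key, {}).get("type")
        if expected == "string" and not isinstance(value, str):
            problems.append(f"{key} should be a string")
    return problems


print(check_args(ARGS_SCHEMA, {"location": "London"}))  # → []
print(check_args(ARGS_SCHEMA, {}))  # → ['missing required argument: location']
```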
Next, the __init__.py file contains the actual Python code for your skill. This is where the logic for fetching weather will live. For now, let’s create a minimal version:
import requests
import os


class LocalWeatherSkill:
    def __init__(self):
        self.api_base_url = os.getenv("LOCAL_WEATHER_API_URL", "http://localhost:8080/weather")

    def get_current_weather(self, location: str) -> str:
        try:
            # Pass the location via params so it is URL-encoded, and set a
            # timeout so the Timeout handler below can actually trigger.
            response = requests.get(self.api_base_url, params={"location": location}, timeout=10)
            response.raise_for_status()  # Raise an exception for HTTP errors
            data = response.json()
            if data and "temperature" in data and "conditions" in data:
                return f"Current weather in {location}: {data['temperature']}°C, {data['conditions']}."
            else:
                return f"Could not parse weather data for {location}."
        except requests.exceptions.ConnectionError:
            return f"Error: Could not connect to the local weather API at {self.api_base_url}. Is it running?"
        except requests.exceptions.Timeout:
            return "Error: Local weather API request timed out."
        except requests.exceptions.RequestException as e:
            return f"Error fetching weather: {e}"
        except Exception as e:
            return f"An unexpected error occurred: {e}"


# OpenClaw will instantiate this class
def get_skill_instance():
    return LocalWeatherSkill()
The get_skill_instance() function at the bottom is OpenClaw’s entry point: it must return an instance of your skill class. Notice the use of os.getenv for the API URL. This keeps sensitive information and environment-specific configuration out of the code, managed instead via environment variables.
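To see how such an entry point can be consumed, the loader below is a speculative sketch, not OpenClaw’s actual code: it walks a skills directory, imports each skill’s __init__.py, and calls its get_skill_instance():

```python
# Hypothetical sketch of skill discovery: import skills/<name>/__init__.py
# and call its get_skill_instance() entry point.
import importlib.util
from pathlib import Path


def load_skills(skills_dir: Path) -> dict:
    """Return a mapping of skill name -> skill instance."""
    skills = {}
    for init_py in skills_dir.glob("*/__init__.py"):
        name = init_py.parent.name  # the skill's directory name
        spec = importlib.util.spec_from_file_location(name, init_py)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # The contract: every skill module exposes get_skill_instance().
        skills[name] = module.get_skill_instance()
    return skills
```

A loader like this is why the entry point must be a module-level function with exactly that name: discovery is by convention, not by registration.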
Handling Dependencies
Our local_weather skill uses the requests library. OpenClaw runs skills in isolated environments, but you still need to manage dependencies. The most straightforward way is to include a requirements.txt file in your skill’s directory. For our skill:
# ~/.openclaw/skills/local_weather/requirements.txt
requests==2.31.0
When OpenClaw loads your skill for the first time or detects changes, it will attempt to install these dependencies into a virtual environment specific to that skill. This is why it’s important to pin exact versions or ranges; otherwise, you might run into conflicts or unexpected behavior if a dependency updates and breaks your skill. OpenClaw uses pip for this, so standard requirements.txt syntax applies.
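For reference, the common pinning styles in standard requirements.txt syntax look like this (the commented alternatives are illustrative, not recommendations for this particular skill):

```
requests==2.31.0        # exact pin: most reproducible
# requests~=2.31        # compatible release: any 2.31.x
# requests>=2.28,<3.0   # bounded range: newer patches, no major bumps
```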
Environment Variables and Configuration
For skills that interact with external services or require API keys, environment variables are the recommended approach. In our weather example, we defined LOCAL_WEATHER_API_URL. To make this available to OpenClaw and, consequently, to your skill, you’ll need to set it in the environment where OpenClaw runs. If you’re running OpenClaw with systemd, you’d modify your service file. For a Hetzner VPS, this might look like:
# /etc/systemd/system/openclaw.service (example)
...
[Service]
Environment="LOCAL_WEATHER_API_URL=http://your-local-weather-service:8080/api/v1/weather"
ExecStart=/usr/local/bin/openclaw serve
...
After modifying the service file, remember to run sudo systemctl daemon-reload and sudo systemctl restart openclaw. If you’re running OpenClaw manually, simply export the variable before starting it: export LOCAL_WEATHER_API_URL="http://127.0.0.1:8080/weather" && openclaw serve.
A non-obvious insight here: while you might be tempted to put configuration directly into the __init__.py or even a skill-specific JSON file, using environment variables via os.getenv() is far more robust. It cleanly separates configuration from code, allows for easy overrides in different deployment environments (e.g., dev vs. prod), and prevents sensitive data from being accidentally committed to version control. Furthermore, OpenClaw’s skill loading mechanism doesn’t directly support injecting arbitrary configuration into a skill beyond what’s defined in its config.json, so environment variables are your best bet for runtime parameters.
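One small pattern worth adopting alongside os.getenv() is failing fast at startup when a required setting is missing, rather than failing at first use. A minimal sketch — require_env is a hypothetical helper, not an OpenClaw API:

```python
import os
from typing import Optional


def require_env(name: str, default: Optional[str] = None) -> str:
    """Read a setting from the environment; raise immediately if a
    required variable is missing and no default is provided."""
    value = os.getenv(name, default)
    if value is None:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value


# Optional setting with a fallback; a missing required one raises at once.
api_url = require_env("LOCAL_WEATHER_API_URL", "http://localhost:8080/weather")
```

Calling this in your skill’s __init__ surfaces misconfiguration in OpenClaw’s load-time logs instead of as a confusing runtime error mid-conversation.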
Testing and Debugging
Once your skill is in place, restart OpenClaw. It should automatically detect and load your new skill. You can verify this by checking OpenClaw’s logs. Look for messages indicating skill discovery and loading, typically containing the skill’s name and version. If there are dependency issues, you’ll see errors related to pip install in the logs. If the skill fails to load, OpenClaw will log the traceback from your __init__.py.
To test the skill, interact with OpenClaw naturally. Ask it: “What’s the weather like in London?” OpenClaw’s AI should recognize that it has a tool (your get_current_weather command) that can answer this query, call it with “London” as the location argument, and then return the result. If it doesn’t, inspect OpenClaw’s thought process in the logs. Often, the AI needs a clearer description in config.json or a more precise command name to correctly map user intent to your skill.
Limitations: This approach works well for lightweight, mostly I/O-bound skills, such as ones that make network calls. However, if your skill requires significant computational resources, such as a large language model or a complex computer vision model, it may struggle when run directly within OpenClaw’s skill environment on a typical VPS (like a Hetzner CX11 or CX21). The skill’s Python process inherits the OpenClaw process’s resource limits. For heavy lifting, it’s generally better to have your skill act as a thin client to a separate, optimized service (e.g., a dedicated GPU instance for inference): pass the request to that service and return its output.
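The thin-client pattern fits in a few lines. In this sketch the endpoint and JSON payload shape are assumptions for illustration, and stdlib urllib is used so the snippet has no third-party dependencies:

```python
# Hypothetical thin-client skill helper: delegate heavy inference to a
# separate service and just relay the request and response.
import json
import urllib.request


def run_remote_inference(endpoint: str, prompt: str, timeout: float = 30.0) -> dict:
    """POST the request to a dedicated inference service and return its
    parsed JSON reply; no heavy computation happens in this process."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

As with the weather API URL, the endpoint would come from an environment variable, keeping the GPU box’s address out of the skill code.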