# LLM Providers

OpenClaw is model-agnostic: it routes messages to whichever model you configure. This guide covers setting up different LLM providers with your ClawBook instance.
## Provider Overview
| Provider | Recommended Model | Best For | Pricing |
|---|---|---|---|
| Anthropic | Claude Sonnet 4.5 | General use, coding, reasoning | Pay-per-token |
| OpenAI | GPT-4o | Varied tasks, image understanding | Pay-per-token |
| Google | Gemini 2.0 Pro | Long context, multimodal | Pay-per-token |
| Ollama | Llama 4, Mistral | Privacy, no API costs | Free (your hardware) |
| OpenRouter | Multiple | Access many providers via one API | Pay-per-token |
OpenClaw was originally optimized for Anthropic's Claude models. While other providers work well, you may notice some features work best with Claude.
## Anthropic Claude (Recommended)
Claude is the default recommendation for most OpenClaw users. The project was built with Claude in mind, and it shows in the tool calling and instruction-following capabilities.
### Getting an API Key
- Go to console.anthropic.com
- Create an account or sign in
- Navigate to API Keys in the left sidebar
- Click Create Key
- Copy your key immediately (it won't be shown again)
### Configuration via Onboarding Wizard

The easiest method is running the onboarding wizard:

```shell
openclaw onboard
```

Select Anthropic when prompted and paste your API key.
### Manual Configuration

If you need to configure manually:

```shell
# Set provider
openclaw config set llm.provider "anthropic"

# Set API key (stored encrypted)
openclaw config set llm.anthropic.apiKey "sk-ant-api03-xxxxxxxxxxxx"

# Set default model
openclaw config set llm.anthropic.model "claude-sonnet-4-5-20250514"
```
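To keep the key out of your shell history, you can prompt for it instead of typing it inline. A small sketch; it assumes `config set` accepts the variable's value like any other string:

```shell
# Prompt for the key so it never lands in shell history
printf 'Anthropic API key: '
read -r ANTHROPIC_API_KEY
openclaw config set llm.anthropic.apiKey "$ANTHROPIC_API_KEY"
```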
### Available Claude Models

| Model | Context Window | Speed | Cost | Notes |
|---|---|---|---|---|
| claude-opus-4-5-20251101 | 200K tokens | Slower | $$$$ | Most capable, best reasoning |
| claude-sonnet-4-5-20250514 | 200K tokens | Fast | $$ | Best balance |
| claude-3-5-haiku-20241022 | 200K tokens | Fastest | $ | Quick queries, lowest cost |
For most users, Claude Sonnet provides the best balance of capability and cost.
### Testing Your Connection

```shell
openclaw health
```

Look for:

```
LLM Provider: anthropic
Status: connected
Model: claude-sonnet-4-5-20250514
Test response: OK (234ms)
```
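If you script deployments, it can help to gate later steps on this check. A minimal sketch, assuming `openclaw health` exits non-zero when the provider is unreachable; the `require_healthy` helper is hypothetical, not part of OpenClaw:

```shell
# require_healthy CMD...: run a health-check command, abort with a message on failure
require_healthy() {
  if ! "$@" >/dev/null 2>&1; then
    echo "LLM provider unreachable; aborting" >&2
    return 1
  fi
}

# Usage in a deploy script:
#   require_healthy openclaw health || exit 1
```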
## OpenAI GPT

### Getting an API Key
- Go to platform.openai.com
- Sign in or create an account
- Navigate to API Keys
- Click Create new secret key
- Copy your key
### Configuration

```shell
openclaw config set llm.provider "openai"
openclaw config set llm.openai.apiKey "sk-xxxxxxxxxxxx"
openclaw config set llm.openai.model "gpt-4o"
```
### Available OpenAI Models

| Model | Context | Speed | Cost |
|---|---|---|---|
| gpt-4o | 128K | Fast | $$ |
| gpt-4-turbo | 128K | Medium | $$$ |
| gpt-4o-mini | 128K | Fastest | $ |
## Google Gemini

### Getting an API Key
- Go to aistudio.google.com
- Sign in with your Google account
- Click Get API Key
- Create a key for your project
### Configuration

```shell
openclaw config set llm.provider "google"
openclaw config set llm.google.apiKey "AIzaSyxxxxxxxxxx"
openclaw config set llm.google.model "gemini-2.0-flash"
```
### A Note on Google Integration
Google/Gemini is treated as a "second-class citizen" in OpenClaw according to community feedback. Basic functionality works, but some advanced features may not be as polished as with Anthropic.
## Ollama (Local Models)
Run AI models entirely on your own hardware with zero API costs. Perfect for privacy-conscious users or those with capable GPUs.
### Prerequisites
Your ClawBook VPS needs sufficient resources:
| Model Size | Required RAM | Recommended GPU |
|---|---|---|
| 7B params | 8 GB | Optional |
| 13B params | 16 GB | Recommended |
| 70B params | 48 GB+ | Required |
ClawBook Standard (4 GB) can run small models; Pro (8 GB) or Elite (16 GB) is recommended for serious local use.
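As a back-of-the-envelope check before downloading a model, weight memory is roughly parameters × bits-per-weight / 8. The helper below is illustrative, not part of OpenClaw, and the 20% overhead factor for KV cache and runtime is a rough assumption:

```shell
# estimate_ram_gb PARAMS_BILLIONS BITS_PER_WEIGHT:
# rough RAM (GB) = params * bits / 8, plus ~20% for KV cache and runtime
estimate_ram_gb() {
  awk -v p="$1" -v bits="$2" 'BEGIN { printf "%.1f\n", p * bits / 8 * 1.2 }'
}

estimate_ram_gb 7 4    # 7B model, 4-bit quantized
estimate_ram_gb 70 8   # 70B model, 8-bit quantized
```

By this estimate a 4-bit 7B model fits comfortably in 8 GB, while a 70B model needs tens of gigabytes, consistent with the table above.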
### Installing Ollama

Ollama comes pre-installed on ClawBook instances. If you need to reinstall:

```shell
curl -fsSL https://ollama.ai/install.sh | sh
```
### Downloading Models

```shell
# Recommended general-purpose model
ollama pull llama3.1

# Fast, efficient model
ollama pull mistral

# Code-focused model
ollama pull codellama

# List downloaded models
ollama list
```
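In provisioning scripts you may want to pull a model only when it is missing. A sketch; `has_model` is a hypothetical helper that parses `ollama list` output, whose first column is the model tag:

```shell
# has_model LISTING MODEL: succeed if MODEL appears in `ollama list` output
has_model() {
  printf '%s\n' "$1" | awk '{print $1}' | grep -q "^$2"
}

# Usage: pull only when missing
#   has_model "$(ollama list)" llama3.1 || ollama pull llama3.1
```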
### Configuration

```shell
openclaw config set llm.provider "ollama"
openclaw config set llm.ollama.baseUrl "http://localhost:11434"
openclaw config set llm.ollama.model "llama3.1"
```
### Performance Tips

- Use quantized models (e.g., `llama3.1:8b-q4`) for faster inference on limited hardware
- Set an appropriate context length to avoid memory issues
- Monitor RAM usage with `htop` during inference
## OpenRouter (Multi-Provider Gateway)
OpenRouter provides a single API that routes to multiple providers, useful for:
- Accessing models from different providers with one key
- Automatic fallback if one provider is down
- Usage tracking across providers
### Configuration

```shell
openclaw config set llm.provider "openrouter"
openclaw config set llm.openrouter.apiKey "sk-or-xxxxxxxxxxxx"
openclaw config set llm.openrouter.model "anthropic/claude-3.5-sonnet"
```
## Advanced Settings

### Temperature

Controls randomness in responses:

```shell
# Lower = more deterministic (good for factual queries)
openclaw config set llm.temperature 0.3

# Higher = more creative (good for brainstorming)
openclaw config set llm.temperature 0.9

# Default balanced setting
openclaw config set llm.temperature 0.7
```
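If you switch temperature per task from scripts, a small lookup keeps the values consistent. The `temp_for_task` helper and its task names are illustrative, not an OpenClaw feature:

```shell
# temp_for_task TASK: map a task type to a temperature setting
temp_for_task() {
  case "$1" in
    factual)  echo 0.3 ;;
    creative) echo 0.9 ;;
    *)        echo 0.7 ;;
  esac
}

# Usage:
#   openclaw config set llm.temperature "$(temp_for_task factual)"
```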
### Max Tokens

Limit response length:

```shell
# Set max output tokens
openclaw config set llm.maxTokens 4096
```
### System Prompt

Customize your assistant's personality:

```shell
openclaw config set agents.default.systemPrompt "You are a helpful assistant named Claw. Be concise and friendly. If you're unsure about something, say so."
```
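For longer prompts, editing a file is easier than quoting everything on the command line. A sketch, assuming `config set` accepts the substituted file contents like any other string:

```shell
# Keep the prompt in a file and load it with command substitution
cat > claw-prompt.txt <<'EOF'
You are a helpful assistant named Claw.
Be concise and friendly. If you're unsure about something, say so.
EOF
openclaw config set agents.default.systemPrompt "$(cat claw-prompt.txt)"
```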
## Multiple Providers & Fallback

You can configure fallback providers for reliability:

```shell
# Primary provider
openclaw config set llm.provider "anthropic"

# Fallback provider
openclaw config set llm.fallback.provider "openai"
openclaw config set llm.fallback.triggers '["rate_limit", "timeout", "error"]'
```
If Anthropic returns a rate limit error or times out, OpenClaw automatically switches to OpenAI.
### Routing by Channel

Route different channels to different providers:

```shell
# WhatsApp uses Claude (for conversations)
openclaw config set channels.whatsapp.llm.provider "anthropic"

# Discord uses local Llama (for privacy)
openclaw config set channels.discord.llm.provider "ollama"
```
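With several channels, the overrides can be applied in one loop. A sketch; the channel list (including `telegram`) is illustrative:

```shell
# Apply per-channel provider overrides from channel:provider pairs
for pair in whatsapp:anthropic discord:ollama telegram:anthropic; do
  channel="${pair%%:*}"
  provider="${pair#*:}"
  openclaw config set "channels.${channel}.llm.provider" "$provider"
done
```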
## Cost Management

### Setting Limits

Protect your API budget:

```shell
# Daily spending limit ($)
openclaw config set llm.limits.dailySpend 10

# Action when limit reached: "block" or "switch_model"
openclaw config set llm.limits.action "switch_model"
openclaw config set llm.limits.fallbackModel "claude-3-5-haiku-20241022"
```
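To sanity-check a daily limit, you can estimate per-request cost from token counts and your provider's per-million-token prices. The `cost_usd` helper and the example prices are illustrative; check your provider's current pricing page:

```shell
# cost_usd IN_TOKENS OUT_TOKENS IN_PRICE OUT_PRICE:
# estimated cost for one request, with prices in USD per million tokens
cost_usd() {
  awk -v it="$1" -v ot="$2" -v ip="$3" -v op="$4" \
    'BEGIN { printf "%.4f\n", it / 1e6 * ip + ot / 1e6 * op }'
}

cost_usd 2000 500 3 15   # a 2K-in / 500-out request at $3/$15 per million
```

At those example rates, a $10 daily limit covers roughly 700 such requests.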
### Monitoring Usage

View your usage:

```shell
openclaw stats usage --period 7d
```
## Troubleshooting

### "Invalid API Key"
- Verify the key is copied correctly (no extra spaces or quotes)
- Check the key hasn't expired or been revoked
- Ensure billing is set up on your provider account
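A quick way to catch the most common paste mistake, stray whitespace or quote characters, before saving the key. The `check_key` helper is illustrative:

```shell
# check_key KEY: flag stray whitespace or quote characters in a pasted key
check_key() {
  case "$1" in
    *[[:space:]]*|*\"*|*"'"*) echo "key contains whitespace or quotes" ;;
    *)                        echo "key looks clean" ;;
  esac
}

check_key "sk-ant-api03-xxxx "   # trailing space from a sloppy paste
```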
"Rate Limit Exceeded"
- Wait and retry (usually resets within 1 minute)
- Configure automatic retry in settings
- Consider upgrading your provider plan
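The automatic retry mentioned above can also be done in a wrapper script with exponential backoff. A minimal sketch; `with_backoff` is hypothetical, and `openclaw health` in the usage line stands in for whatever call hit the limit:

```shell
# with_backoff MAX_TRIES BASE_DELAY CMD...: retry CMD, doubling the delay each time
with_backoff() {
  max="$1"; delay="$2"; shift 2
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max" ] && return 1
    sleep "$delay"
    delay=$((delay * 2))
  done
}

# Usage: up to 5 tries, starting with a 2-second delay
#   with_backoff 5 2 openclaw health
```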
"Model Not Found"
- Check the model name is spelled exactly right
- Some models require waitlist access
- Verify the model is available in your region
### Slow Responses
- Switch to a faster model (Haiku, GPT-4o-mini)
- Check server network latency: `ping api.anthropic.com`
- For local models, ensure sufficient RAM/GPU
"Auth store is empty"
Your API key wasn't saved properly. Re-run:
openclaw onboard
## Next Steps
- WhatsApp Setup — Connect your first channel
- Security Best Practices — Protect your API keys
- Advanced Settings — Fine-tune behavior