LLM Providers

OpenClaw is model-agnostic—it routes messages to whichever AI you configure. This guide covers setting up different LLM providers with your ClawBook instance.

Provider Overview

| Provider | Recommended Model | Best For | Pricing |
| --- | --- | --- | --- |
| Anthropic | Claude 4.5 Sonnet | General use, coding, reasoning | Pay-per-token |
| OpenAI | GPT-4o | Varied tasks, image understanding | Pay-per-token |
| Google | Gemini 2.0 Pro | Long context, multimodal | Pay-per-token |
| Ollama | Llama 4, Mistral | Privacy, no API costs | Free (your hardware) |
| OpenRouter | Multiple | Access many providers via one API | Pay-per-token |

OpenClaw was originally optimized for Anthropic's Claude models, and it shows in the tool-calling and instruction-following behavior. Other providers work well, but some features are most polished with Claude, which is why it remains the default recommendation for most OpenClaw users.

Anthropic Claude

Getting an API Key

  1. Go to console.anthropic.com
  2. Create an account or sign in
  3. Navigate to API Keys in the left sidebar
  4. Click Create Key
  5. Copy your key immediately (it won't be shown again)

Configuration via Onboarding Wizard

The easiest method is running the onboarding wizard:

openclaw onboard

Select Anthropic when prompted and paste your API key.

Manual Configuration

If you need to configure manually:

# Set provider
openclaw config set llm.provider "anthropic"

# Set API key (stored encrypted)
openclaw config set llm.anthropic.apiKey "sk-ant-api03-xxxxxxxxxxxx"

# Set default model
openclaw config set llm.anthropic.model "claude-sonnet-4-5-20250514"

Available Claude Models

| Model | Context Window | Speed | Cost | Notes |
| --- | --- | --- | --- | --- |
| claude-opus-4-5-20251101 | 200K tokens | Slower | $$$$ | Most capable, best reasoning |
| claude-sonnet-4-5-20250514 | 200K tokens | Fast | $$ | Best balance |
| claude-3-5-haiku-20241022 | 200K tokens | Fastest | $ | Quick queries, lowest cost |

For most users, Claude Sonnet provides the best balance of capability and cost.

Testing Your Connection

openclaw health

Look for:

LLM Provider: anthropic
Status: connected
Model: claude-sonnet-4-5-20250514
Test response: OK (234ms)

OpenAI GPT

Getting an API Key

  1. Go to platform.openai.com
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Click Create new secret key
  5. Copy your key

Configuration

openclaw config set llm.provider "openai"
openclaw config set llm.openai.apiKey "sk-xxxxxxxxxxxx"
openclaw config set llm.openai.model "gpt-4o"

Available OpenAI Models

| Model | Context | Speed | Cost |
| --- | --- | --- | --- |
| gpt-4o | 128K | Fast | $$ |
| gpt-4-turbo | 128K | Medium | $$$ |
| gpt-4o-mini | 128K | Fastest | $ |

Google Gemini

Getting an API Key

  1. Go to aistudio.google.com
  2. Sign in with your Google account
  3. Click Get API Key
  4. Create a key for your project

Configuration

openclaw config set llm.provider "google"
openclaw config set llm.google.apiKey "AIzaSyxxxxxxxxxx"
openclaw config set llm.google.model "gemini-2.0-flash"

A Note on Google Integration

Google/Gemini is treated as a "second-class citizen" in OpenClaw according to community feedback. Basic functionality works, but some advanced features may not be as polished as with Anthropic.

Ollama (Local Models)

Run AI models entirely on your own hardware with zero API costs. Perfect for privacy-conscious users or those with capable GPUs.

Prerequisites

Your ClawBook VPS needs sufficient resources:

| Model Size | Required RAM | Recommended GPU |
| --- | --- | --- |
| 7B params | 8 GB | Optional |
| 13B params | 16 GB | Recommended |
| 70B params | 48 GB+ | Required |

ClawBook Standard (4 GB) can run small models; Pro (8 GB) or Elite (16 GB) is recommended for serious local use.
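Before pulling a model, it's worth confirming your instance actually has the headroom the table above describes. A quick check with standard Linux tools (this assumes a typical Linux VPS):

```shell
# Show total and available RAM
free -h

# Show CPU core count (more cores help CPU-only inference)
nproc

# If an NVIDIA GPU is present, show its VRAM; skip quietly otherwise
command -v nvidia-smi >/dev/null && nvidia-smi --query-gpu=memory.total --format=csv
```

Compare the "available" column of `free -h` against the Required RAM column before choosing a model size.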

Installing Ollama

Ollama comes pre-installed on ClawBook instances. If you need to reinstall:

curl -fsSL https://ollama.ai/install.sh | sh

Downloading Models

# Recommended general-purpose model
ollama pull llama3.1

# Fast, efficient model
ollama pull mistral

# Code-focused model
ollama pull codellama

# List downloaded models
ollama list

Configuration

openclaw config set llm.provider "ollama"
openclaw config set llm.ollama.baseUrl "http://localhost:11434"
openclaw config set llm.ollama.model "llama3.1"

Performance Tips

  • Use quantized models (e.g., llama3.1:8b-instruct-q4_0) for faster inference on limited hardware
  • Set appropriate context length to avoid memory issues
  • Monitor RAM usage with htop during inference
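The context-length tip can be applied with an Ollama Modelfile. The sketch below builds a variant of llama3.1 with a capped 4K-token context window; the `num_ctx` value and the `llama3.1-4k` name are illustrative choices, not OpenClaw defaults:

```shell
# Define a model variant with a capped context window
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 4096
EOF

# Build the variant and point OpenClaw at it
ollama create llama3.1-4k -f Modelfile
openclaw config set llm.ollama.model "llama3.1-4k"
```

A smaller context window reduces peak RAM usage during inference, at the cost of how much conversation history the model can see.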

OpenRouter (Multi-Provider Gateway)

OpenRouter provides a single API that routes to multiple providers, useful for:

  • Accessing models from different providers with one key
  • Automatic fallback if one provider is down
  • Usage tracking across providers

Configuration

openclaw config set llm.provider "openrouter"
openclaw config set llm.openrouter.apiKey "sk-or-xxxxxxxxxxxx"
openclaw config set llm.openrouter.model "anthropic/claude-3.5-sonnet"

Advanced Settings

Temperature

Controls randomness in responses:

# Lower = more deterministic (good for factual queries)
openclaw config set llm.temperature 0.3

# Higher = more creative (good for brainstorming)
openclaw config set llm.temperature 0.9

# Default balanced setting
openclaw config set llm.temperature 0.7

Max Tokens

Limit response length:

# Set max output tokens
openclaw config set llm.maxTokens 4096

System Prompt

Customize your assistant's personality:

openclaw config set agents.default.systemPrompt "You are a helpful assistant named Claw. Be concise and friendly. If you're unsure about something, say so."

Multiple Providers & Fallback

You can configure fallback providers for reliability:

# Primary provider
openclaw config set llm.provider "anthropic"

# Fallback provider
openclaw config set llm.fallback.provider "openai"
openclaw config set llm.fallback.triggers '["rate_limit", "timeout", "error"]'

If Anthropic returns a rate limit error or times out, OpenClaw automatically switches to OpenAI.

Routing by Channel

Route different channels to different providers:

# WhatsApp uses Claude (for conversations)
openclaw config set channels.whatsapp.llm.provider "anthropic"

# Discord uses local Llama (for privacy)
openclaw config set channels.discord.llm.provider "ollama"

Cost Management

Setting Limits

Protect your API budget:

# Daily spending limit ($)
openclaw config set llm.limits.dailySpend 10

# Action when limit reached: "block" or "switch_model"
openclaw config set llm.limits.action "switch_model"
openclaw config set llm.limits.fallbackModel "claude-3-5-haiku-20241022"

Monitoring Usage

View your usage:

openclaw stats usage --period 7d

Troubleshooting

"Invalid API Key"

  • Verify the key is copied correctly (no extra spaces or quotes)
  • Check the key hasn't expired or been revoked
  • Ensure billing is set up on your provider account

"Rate Limit Exceeded"

  • Wait and retry (usually resets within 1 minute)
  • Configure automatic retry in settings
  • Consider upgrading your provider plan

"Model Not Found"

  • Check the model name is spelled exactly right
  • Some models require waitlist access
  • Verify the model is available in your region

Slow Responses

  1. Switch to a faster model (Haiku, GPT-4o-mini)
  2. Check server network latency: ping api.anthropic.com
  3. For local models, ensure sufficient RAM/GPU

"Auth store is empty"

Your API key wasn't saved properly. Re-run:

openclaw onboard

Next Steps