
Performance Troubleshooting

This guide helps identify and resolve performance issues with your ClawBook VPS.

Performance Benchmarks

Metric             Good      Acceptable   Poor
AI Response Time   < 2s      2-5s         > 5s
Dashboard Load     < 1s      1-3s         > 3s
Message Delivery   < 500ms   500ms-2s     > 2s
CPU Usage          < 50%     50-80%       > 80%
Memory Usage       < 70%     70-90%       > 90%
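The thresholds above can also be scripted for quick ad-hoc checks. Below is a minimal sketch for the CPU Usage row; `classify_cpu` is an illustrative helper, not a ClawBook command, and the cutoffs (50/80) come straight from the table:

```shell
# Classify a CPU reading against the benchmark table above.
# Cutoffs: < 50% Good, 50-80% Acceptable, > 80% Poor.
classify_cpu() {
  local pct="$1"
  if [ "$pct" -lt 50 ]; then
    echo "Good"
  elif [ "$pct" -le 80 ]; then
    echo "Acceptable"
  else
    echo "Poor"
  fi
}

classify_cpu 35   # Good
```

The same pattern works for the other rows by swapping in their cutoffs.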

Quick Performance Check

clawbook-performance

# Output:
# ClawBook Performance Report
# ===========================
# AI Response: 1.2s avg (Good)
# Dashboard Load: 0.8s (Good)
# CPU: 35% (Good)
# Memory: 2.1GB/4GB - 52% (Good)
# Disk I/O: 12 MB/s (Good)
# Network: 45ms latency to API (Good)
#
# Overall: HEALTHY

Slow AI Responses

Diagnosis

# Check response time history
clawbook-stats performance --period 24h

# Watch real-time
tail -f /var/log/openclaw/app.log | grep "response_time"

Causes & Solutions

1. Model Selection

Faster models = faster responses:

Model               Speed     Quality
Claude 3 Haiku      Fastest   Good
Claude 3.5 Sonnet   Fast      Excellent
Claude 3 Opus       Slow      Best
GPT-4o              Fast      Excellent
GPT-4               Slow      Excellent

Switch models in Settings → AI Providers.

2. Large Context Windows

Reduce context size:

# Settings → AI → Context
max_history_messages: 10 # Reduce from 20
max_context_tokens: 2000 # Reduce from 4000
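To gauge what a lower cap buys, token counts can be roughly estimated from character counts (a common rule of thumb is ~4 characters per token for English text — an assumption, not an exact tokenizer). The `approx_tokens` helper below is illustrative, not a ClawBook command:

```shell
# Rough token estimate for a text file.
# Assumption: ~4 characters per token for English text.
approx_tokens() {
  local chars
  chars=$(wc -c < "$1")
  echo $((chars / 4))
}

# approx_tokens exported-conversation.txt
```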

3. Network Latency

Check latency to AI provider:

ping -c 10 api.anthropic.com

If > 100ms, consider:

  • Moving VPS closer to API servers (US-East for Anthropic/OpenAI)
  • Using a different provider
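The 100ms check can be automated by parsing ping's summary line. This sketch assumes Linux iputils `ping` output (the `rtt min/avg/max/mdev = ...` line; other systems format it differently), and `ping_avg_ms` is an illustrative helper:

```shell
# Extract the average RTT (ms) from iputils ping output on stdin.
# The summary line looks like: rtt min/avg/max/mdev = 12.3/45.6/78.9/1.2 ms
ping_avg_ms() {
  awk -F'/' '/^rtt/ {print $5}'
}

# Live usage:
#   avg=$(ping -c 10 api.anthropic.com | ping_avg_ms)
#   echo "avg latency to API: ${avg} ms"
```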

4. Rate Limiting

If hitting rate limits:

  • Wait for limit reset
  • Reduce message frequency
  • Upgrade provider plan
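When waiting for a limit reset, exponential backoff avoids hammering the API. A minimal sketch, where `send_message` is a hypothetical placeholder for whatever command hits the provider:

```shell
# Retry a command with exponential backoff (1s, 2s, 4s, 8s between tries).
# BACKOFF_DELAY overrides the initial delay; gives up after 5 attempts.
with_backoff() {
  local attempt=0 max=5 delay="${BACKOFF_DELAY:-1}"
  until "$@"; do
    attempt=$((attempt + 1))
    [ "$attempt" -ge "$max" ] && return 1
    sleep "$delay"
    delay=$((delay * 2))
  done
}

# Usage: with_backoff send_message "hello"
```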

High CPU Usage

Diagnosis

# Real-time CPU usage
top -c

# Process-specific
ps aux --sort=-%cpu | head -10

# Historical (if installed)
sar -u 1 10

Causes & Solutions

1. High Message Volume

clawbook-stats messages --period 1h

If consistently high:

  • Enable rate limiting per user
  • Upgrade to higher plan
  • Optimize prompts (shorter = less processing)

2. Background Processes

# Find CPU hogs
ps aux --sort=-%cpu | head -5

Stop unnecessary processes, or investigate why legitimate ones are consuming so much CPU.

3. Insufficient Resources

If CPU usage stays above 80%, the VPS is undersized for its workload; upgrade to a plan with more CPU resources.

High Memory Usage

Diagnosis

# Current usage
free -h

# Process breakdown
ps aux --sort=-%mem | head -10

# Detailed memory map
cat /proc/meminfo

Causes & Solutions

1. Large Conversation Contexts

Memory grows with context:

# Clear old conversations
clawbook-cleanup conversations --older-than 7d

# Reduce context settings
# Settings → AI → max_context_tokens: 2000

2. Memory Leaks

# Check if memory grows over time without release
watch -n 60 free -h
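The same check can be scripted by sampling `MemAvailable` from `/proc/meminfo` at two points in time. `mem_available_kb` is an illustrative helper (not a ClawBook command), and the 500 MB threshold in the comment is an arbitrary example:

```shell
# Read MemAvailable (kB) from a meminfo file (default: /proc/meminfo).
mem_available_kb() {
  awk '/^MemAvailable:/ {print $2}' "${1:-/proc/meminfo}"
}

# Live usage: sample an hour apart and flag a large sustained drop.
#   first=$(mem_available_kb); sleep 3600; second=$(mem_available_kb)
#   [ $((first - second)) -gt 500000 ] && echo "possible leak: ${first} -> ${second} kB"
```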

Fix: Restart services periodically:

# Add to crontab
0 4 * * * systemctl restart openclaw

3. PostgreSQL Buffers

Tune PostgreSQL for your RAM:

# /etc/postgresql/*/main/postgresql.conf
shared_buffers = 1GB          # 25% of RAM
effective_cache_size = 2GB    # 50% of RAM

Then restart PostgreSQL:

sudo systemctl restart postgresql
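The 25%/50% rules of thumb can be turned into a quick calculation for any RAM size. `pg_suggest` is an illustrative helper, not a ClawBook command:

```shell
# Suggest PostgreSQL settings from total RAM in MB:
# shared_buffers = 25% of RAM, effective_cache_size = 50% of RAM.
pg_suggest() {
  local ram_mb="$1"
  echo "shared_buffers = $((ram_mb / 4))MB"
  echo "effective_cache_size = $((ram_mb / 2))MB"
}

pg_suggest 4096
# shared_buffers = 1024MB
# effective_cache_size = 2048MB
```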

Slow Dashboard

Diagnosis

# Check Caddy response times
tail -f /var/log/caddy/access.log

# Check backend response (-k skips certificate validation for the localhost check)
curl -k -w "Time: %{time_total}\n" -o /dev/null -s https://localhost:8443

Solutions

1. Enable Caching

# /etc/openclaw/config.yaml
cache:
  enabled: true
  ttl: 300            # 5 minutes
  static_files: true

2. Optimize Database Queries

# Check slow queries
tail -f /var/log/postgresql/postgresql-*-main.log | grep -i slow

Run maintenance:

sudo -u postgres vacuumdb --all --analyze

3. CDN for Static Assets

Configure Cloudflare or similar CDN for static files.


Disk I/O Issues

Diagnosis

# Real-time I/O
iostat -x 1 5

# Check disk usage
df -h

# Find large files
du -sh /* | sort -rh | head -10

Solutions

1. Clean Up Disk

# Remove old logs
sudo journalctl --vacuum-time=7d

# Clean apt cache
sudo apt autoremove -y
sudo apt clean

# ClawBook cleanup
clawbook-cleanup all --older-than 30d
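Before deleting anything by hand, it helps to list what is actually stale. `stale_files` is an illustrative helper (not a ClawBook command) that finds files older than a given number of days under a directory:

```shell
# List files under $1 last modified more than $2 days ago,
# for review before manual deletion.
stale_files() {
  find "$1" -type f -mtime "+$2" 2>/dev/null
}

# stale_files /var/log 7
```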

2. Move Logs to Separate Disk

If using multiple disks, move logs:

# Mount new disk at /var/log
# Configure log rotation

3. Upgrade Storage

If the VPS keeps running out of space even after cleanup, upgrade to a plan with more storage.


Network Performance

Diagnosis

# Check bandwidth
speedtest-cli

# Check latency
ping -c 10 api.anthropic.com

# Check for packet loss
mtr -rw api.anthropic.com

Solutions

1. Optimize Location

Move VPS closer to:

  • Your users (for dashboard)
  • AI providers (for API calls)

Most AI APIs are US-based, so US-East is often optimal.

2. Enable Keep-Alive

# /etc/openclaw/config.yaml
network:
  http_keep_alive: true
  connection_timeout: 30

3. Use HTTP/2

Caddy enables HTTP/2 by default. Verify:

curl -I --http2 https://yourdomain.com
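The verification can be scripted by inspecting the status line: when HTTP/2 is negotiated, curl prints a status line beginning with `HTTP/2`. `is_http2` is an illustrative helper, not a standard tool:

```shell
# Check whether the first line of curl -I output advertises HTTP/2.
is_http2() {
  head -n 1 | grep -q '^HTTP/2'
}

# curl -sI --http2 https://yourdomain.com | is_http2 && echo "HTTP/2 active"
```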

Database Optimization

Regular Maintenance

# Vacuum and analyze
sudo -u postgres vacuumdb --all --analyze

# Reindex
sudo -u postgres reindexdb --all

Check for Issues

# Table sizes
sudo -u postgres psql -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"

# Index usage
sudo -u postgres psql -c "SELECT indexrelname, idx_scan FROM pg_stat_user_indexes ORDER BY idx_scan DESC LIMIT 10;"

Tuning

Adjust based on your plan:

Setting                Standard (4GB)   Pro (8GB)   Elite (16GB)
shared_buffers         1GB              2GB         4GB
effective_cache_size   2GB              5GB         12GB
work_mem               64MB             128MB       256MB

Optimization Checklist

Quick Wins

  • Use faster AI model (Sonnet/Haiku vs Opus)
  • Reduce context window size
  • Enable caching
  • Clean up old data
  • Restart services weekly

Medium Effort

  • Tune PostgreSQL settings
  • Optimize log rotation
  • Set up monitoring alerts
  • Configure rate limiting

Major Changes

  • Upgrade plan for more resources
  • Move to optimal datacenter location
  • Implement CDN
  • Horizontal scaling (Enterprise)

Monitoring Performance

Set Up Alerts

Configure alerts for performance degradation:

# /etc/openclaw/config.yaml
alerts:
  performance:
    ai_response_threshold_ms: 5000
    cpu_threshold_percent: 85
    memory_threshold_percent: 90
    disk_threshold_percent: 85
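The comparison behind these percentage thresholds is simple to sketch; `check_threshold` below is illustrative (not part of ClawBook), and takes the measured value and limit as arguments so the logic is easy to test in isolation:

```shell
# Print an alert if a metric meets or exceeds its threshold.
check_threshold() {
  local name="$1" value="$2" limit="$3"
  if [ "$value" -ge "$limit" ]; then
    echo "ALERT: ${name} at ${value}% (threshold ${limit}%)"
  fi
}

check_threshold cpu 92 85
# ALERT: cpu at 92% (threshold 85%)
```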

Regular Reviews

Weekly:

  • Check clawbook-performance
  • Review resource trends
  • Clean up if needed

Monthly:

  • Full performance audit
  • Database maintenance
  • Capacity planning

Need More Help?

If performance issues persist:

  1. Gather diagnostics:

     clawbook-diagnostics performance > ~/perf-report.txt

  2. Contact support and attach the report.