Clawdbot/OpenClaw + Ollama as your personal assistant

Original on Medium

Getting Started with Clawdbot: A Complete Onboarding Guide with Ollama

Clawdbot is an open-source AI assistant that runs on your infrastructure. Unlike cloud-only assistants, it keeps your data local, works across multiple messaging platforms (WhatsApp, Telegram, Discord, Signal), and gives you full control over which AI models power your conversations.

This guide walks you through setting up Clawdbot for the first time and configuring it to use Ollama as your local model provider — keeping everything on your machine while maintaining access to powerful AI capabilities.

What You’ll Need

• macOS, Linux, or Windows (WSL recommended for Windows)

• Node.js 20+ and npm

• Ollama installed locally

• A reasonably powerful machine (16 GB+ RAM recommended; a GPU is optional but helpful)

Step 1: Install Clawdbot

The quickest way to get started is via npm:

npm install -g clawdbot

Or use the installer script:

curl -fsSL https://clawd.bot/install | bash

Verify the installation:

clawdbot --version

Step 2: Run the Onboarding Wizard

Clawdbot includes an interactive wizard that sets up your configuration and agent workspace:

clawdbot onboard

You’ll be guided through:

• Gateway setup: the daemon that connects to messaging platforms

• Authentication: generate or provide a gateway token

• Workspace creation: where your agent's files, memory, and configuration live

For a minimal setup, use the quickstart flow:

clawdbot onboard --flow quickstart

This auto-generates everything needed to start chatting immediately.

Step 3: Install and Configure Ollama

Before connecting Clawdbot, ensure Ollama is running with a suitable model:

Install Ollama (macOS/Linux)

curl -fsSL https://ollama.com/install.sh | sh

Pull a capable model (Mistral, Llama 3.1, or similar)

ollama pull mistral:latest

Start the Ollama server (if not already running)

ollama serve

By default, Ollama runs on http://127.0.0.1:11434 and exposes an OpenAI-compatible API.
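Before wiring up Clawdbot, you can exercise this endpoint directly. A minimal Python sketch of a chat-completions request (the URL and model name are the defaults from the steps above; the helper names are illustrative, not part of any tool's API):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible chat endpoint (default host/port).
OLLAMA_URL = "http://127.0.0.1:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for Ollama."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON response, not a token stream
    }

def ask(prompt: str, model: str = "mistral:latest") -> str:
    """POST the payload to Ollama and return the assistant's reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard OpenAI response shape: choices[0].message.content
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Say hello in one sentence."))
```

If this round-trips successfully, any later problems are on the Clawdbot side rather than in Ollama itself.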

Step 4: Configure Clawdbot to Use Ollama

Edit your Clawdbot configuration file (located at ~/.clawdbot/moltbot.json, or in your workspace):

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/mistral:latest"
      },
      "models": {
        "ollama/mistral:latest": {
          "alias": "Mistral Local"
        }
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama",
        "api": "openai-responses",
        "models": [
          {
            "id": "mistral:latest",
            "name": "Mistral Local",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 32000,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}

Key configuration points:

• baseUrl: Ollama's OpenAI-compatible endpoint (default: http://127.0.0.1:11434/v1)

• api: use "openai-responses" for cleaner output handling

• contextWindow: set this based on your model's actual limits

• "mode": "merge": allows fallback to cloud providers if Ollama becomes unavailable
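Typos in this file tend to fail silently (the agent simply can't find its model), so a quick sanity check can save debugging time. A small sketch that validates the two points most likely to go wrong; the expected keys mirror the example above, and the function name is illustrative:

```python
def check_ollama_provider(config: dict) -> list[str]:
    """Return a list of problems found in the Ollama provider block."""
    problems = []
    provider = config.get("models", {}).get("providers", {}).get("ollama")
    if provider is None:
        return ["no models.providers.ollama block found"]
    base_url = provider.get("baseUrl", "")
    if not base_url.endswith("/v1"):
        problems.append(f"baseUrl should end in /v1, got {base_url!r}")
    declared_ids = {m.get("id") for m in provider.get("models", [])}
    primary = (
        config.get("agents", {}).get("defaults", {}).get("model", {}).get("primary", "")
    )
    # primary is written "ollama/<model-id>"; the id after the slash
    # must be declared in the provider's models list.
    if primary.startswith("ollama/") and primary.split("/", 1)[1] not in declared_ids:
        problems.append(f"primary model {primary!r} is not declared under the provider")
    return problems
```

Run it against the parsed JSON of your config file; an empty list means the provider block is at least structurally consistent.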

Ollama also provides dedicated commands for this setup:

• ollama launch clawdbot — configures Clawdbot to use Ollama and starts the gateway in one step

• ollama launch clawdbot --config — configures only, without launching

Step 5: Connect a Messaging Channel

WhatsApp (Recommended for Mobile Access)

clawdbot gateway start --channel whatsapp

A QR code will appear. Scan it with WhatsApp on your phone (Linked Devices → Link a Device). Your personal WhatsApp account can now message your local AI.

Telegram Bot

Create a bot via @BotFather, then:

clawdbot gateway start --channel telegram --token YOUR_BOT_TOKEN

Discord

Create a bot in the Discord Developer Portal, enable Message Content Intent, and:

clawdbot gateway start --channel discord --token YOUR_BOT_TOKEN

Step 6: Verify Everything Works

1. Check gateway status:

clawdbot gateway status

2. Test model connectivity:

curl http://127.0.0.1:11434/v1/models

3. Send a test message via your connected channel. You should see responses powered by your local Ollama model.
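The /v1/models endpoint returns an OpenAI-style list object, so it is easy to check programmatically that your pulled model is actually visible. A small sketch (the response shape is the standard OpenAI list format; the helper names are illustrative):

```python
import json
import urllib.request

def list_model_ids(models_response: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models list object."""
    return [entry["id"] for entry in models_response.get("data", [])]

def fetch_model_ids(base_url: str = "http://127.0.0.1:11434/v1") -> list[str]:
    """Query Ollama's /v1/models endpoint and return the available model IDs."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return list_model_ids(json.load(resp))

if __name__ == "__main__":
    # Should include "mistral:latest" if the pull in Step 3 succeeded.
    print(fetch_model_ids())
```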

Pro Tips for Local Model Usage

Hybrid Setup: Local Primary, Cloud Fallback

Keep cloud models as backups for when Ollama is offline or overwhelmed:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/mistral:latest",
        "fallbacks": ["anthropic/claude-sonnet-4", "openai/gpt-4o-mini"]
      }
    }
  }
}
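Conceptually, a fallback chain just tries each model in order until one answers. The sketch below is not Clawdbot's actual implementation, only an illustration of the behavior the config above describes:

```python
def ask_with_fallback(prompt, clients, order):
    """Try each model in `order`; return (model, reply) for the first success.

    `clients` maps model name -> callable(prompt) that may raise on failure.
    """
    errors = {}
    for model in order:
        try:
            return model, clients[model](prompt)
        except Exception as exc:  # e.g. connection refused while Ollama is down
            errors[model] = exc
    raise RuntimeError(f"all models failed: {errors}")
```

The trade-off to keep in mind: once a fallback fires, your prompt leaves your machine and goes to the cloud provider, so omit the fallbacks entirely if strict locality matters more than availability.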

Performance Optimization

• Keep models loaded: Ollama unloads models after a timeout. For faster responses, set OLLAMA_KEEP_ALIVE=24h when running ollama serve.

• Context management: lower contextWindow if you experience slowdowns. Start with 8K–16K tokens.

• Quantization: use q4_K_M or q5_K_M quantized models for good quality at lower memory usage.
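Rough arithmetic shows why quantization matters: weight memory scales with bits per weight. A back-of-the-envelope sketch (the ~4.5 bits/weight average for q4_K_M is an approximation, and KV cache and runtime overhead are ignored):

```python
def approx_model_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-memory footprint in GB (ignores KV cache and overhead)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return round(bytes_total / 1e9, 1)

# A 7B model at float16 needs roughly 14 GB just for weights,
# while q4_K_M (~4.5 bits/weight) brings that down to roughly 4 GB --
# the difference between swapping and fitting comfortably in 16 GB RAM.
```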

Security Considerations

Local models bypass cloud safety filters. To mitigate risks:

• Keep agent capabilities narrow (limit tool access via the capabilities config)

• Enable session compaction to prevent context-window attacks

• Review the agent's SOUL.md to define appropriate boundaries

Troubleshooting

"Connection refused" to Ollama
• Solution: verify ollama serve is running and listening on the correct port

Gateway won't start
• Solution: check clawdbot doctor for diagnostic output

Slow responses
• Solution: use a smaller model, enable GPU acceleration, or reduce contextWindow

Model not found
• Solution: ensure you've run ollama pull modelname and that the model ID matches your config

No messages received
• Solution: verify the channel token/QR code and check clawdbot gateway logs
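For the "connection refused" case specifically, a quick TCP probe distinguishes "Ollama isn't running at all" from a configuration problem higher up the stack. A minimal sketch assuming the default host and port:

```python
import socket

def ollama_reachable(host: str = "127.0.0.1", port: int = 11434,
                     timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on Ollama's port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, host unreachable, ...
        return False
```

If this returns False, fix the Ollama side first (start ollama serve, check OLLAMA_HOST); if it returns True but Clawdbot still fails, the problem is more likely the baseUrl or model ID in your config.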

Next Steps

• Customize your agent: edit files in your workspace (~/clawd by default) to shape personality and capabilities

• Add skills: run clawdbot skills to browse installable capabilities like weather, web search, or home automation

• Explore the dashboard: run clawdbot dashboard for a web-based control interface

• Set up cron jobs: use clawdbot cron for scheduled tasks and proactive notifications

Resources:

• Clawdbot Documentation

• Ollama Model Library

• Ollama Clawdbot Integration

• GitHub: clawdbot/clawdbot

Enjoy your fully private, locally-powered AI assistant! 🤖
