# Ollama

Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates with Ollama’s native API (`/api/chat`), supports streaming and tool calling, and can auto-discover local Ollama models when you opt in with `OLLAMA_API_KEY` (or an auth profile) and do not define an explicit `models.providers.ollama` entry.
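As an illustration of the native API, a streaming chat request with a tool definition looks roughly like this (request body for Ollama’s `/api/chat`; the `get_weather` tool is a made-up example):

```json
{
  "model": "llama3.3",
  "stream": true,
  "messages": [
    { "role": "user", "content": "What's the weather in Berlin?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }
  ]
}
```

With `"stream": true`, Ollama responds with newline-delimited JSON chunks; OpenClaw handles this protocol for you.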
## Quick start

### Onboarding (recommended)

The fastest way to set up Ollama is through onboarding:

```shell
openclaw onboard
```

Select Ollama from the provider list. Onboarding will:
- Ask for the Ollama base URL where your instance can be reached (default `http://127.0.0.1:11434`).
- Let you choose **Cloud + Local** (cloud models and local models) or **Local** (local models only).
- Open a browser sign-in flow if you choose Cloud + Local and are not signed in to ollama.com.
- Discover available models and suggest defaults.
- Auto-pull the selected model if it is not available locally.
Non-interactive mode is also supported:
```shell
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --accept-risk
```

Optionally specify a custom base URL or model:

```shell
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://ollama-host:11434" \
  --custom-model-id "qwen3.5:27b" \
  --accept-risk
```

### Manual setup
1. Install Ollama: https://ollama.com/download

2. Pull a local model if you want local inference:

   ```shell
   ollama pull glm-4.7-flash
   # or
   ollama pull gpt-oss:20b
   # or
   ollama pull llama3.3
   ```

3. If you want cloud models too, sign in:

   ```shell
   ollama signin
   ```

4. Run onboarding and choose Ollama:

   ```shell
   openclaw onboard
   ```

   - **Local**: local models only
   - **Cloud + Local**: local models plus cloud models
   - Cloud models such as `kimi-k2.5:cloud`, `minimax-m2.5:cloud`, and `glm-5:cloud` do not require a local `ollama pull`.
OpenClaw currently suggests:

- local default: `glm-4.7-flash`
- cloud defaults: `kimi-k2.5:cloud`, `minimax-m2.5:cloud`, `glm-5:cloud`
5. If you prefer manual setup, enable Ollama for OpenClaw directly (any value works; Ollama doesn’t require a real key):

   ```shell
   # Set environment variable
   export OLLAMA_API_KEY="ollama-local"

   # Or configure in your config file
   openclaw config set models.providers.ollama.apiKey "ollama-local"
   ```

6. Inspect or switch models:

   ```shell
   openclaw models list
   openclaw models set ollama/glm-4.7-flash
   ```

7. Or set the default in config:

   ```json5
   {
     agents: {
       defaults: {
         model: { primary: "ollama/glm-4.7-flash" },
       },
     },
   }
   ```

## Model discovery (implicit provider)
When you set `OLLAMA_API_KEY` (or an auth profile) and do not define `models.providers.ollama`, OpenClaw discovers models from the local Ollama instance at `http://127.0.0.1:11434`:

- Queries `/api/tags`
- Uses best-effort `/api/show` lookups to read `contextWindow` when available
- Marks `reasoning` with a model-name heuristic (`r1`, `reasoning`, `think`)
- Sets `maxTokens` to the default Ollama max-token cap used by OpenClaw
- Sets all costs to `0`
This avoids manual model entries while keeping the catalog aligned with the local Ollama instance.
To see what models are available:

```shell
ollama list
openclaw models list
```

To add a new model, simply pull it with Ollama:

```shell
ollama pull mistral
```

The new model will be automatically discovered and available to use.
If you set `models.providers.ollama` explicitly, auto-discovery is skipped and you must define models manually (see below).
## Configuration

### Basic setup (implicit discovery)

The simplest way to enable Ollama is via environment variable:

```shell
export OLLAMA_API_KEY="ollama-local"
```

### Explicit setup (manual models)
Use explicit config when:
- Ollama runs on another host/port.
- You want to force specific context windows or model lists.
- You want fully manual model definitions.
```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434",
        apiKey: "ollama-local",
        api: "ollama",
        models: [
          {
            id: "gpt-oss:20b",
            name: "GPT-OSS 20B",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192 * 10,
          },
        ],
      },
    },
  },
}
```

If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and OpenClaw will fill it in for availability checks.
### Custom base URL (explicit config)

If Ollama is running on a different host or port (explicit config disables auto-discovery, so define models manually):

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434", // No /v1 - use native Ollama API URL
        api: "ollama", // Set explicitly to guarantee native tool-calling behavior
      },
    },
  },
}
```

### Model selection
Once configured, all your Ollama models are available:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/gpt-oss:20b",
        fallbacks: ["ollama/llama3.3", "ollama/qwen2.5-coder:32b"],
      },
    },
  },
}
```

## Cloud models

Cloud models let you run cloud-hosted models (for example `kimi-k2.5:cloud`, `minimax-m2.5:cloud`, `glm-5:cloud`) alongside your local models.
To use cloud models, select Cloud + Local mode during setup. The wizard checks whether you are signed in and opens a browser sign-in flow when needed. If authentication cannot be verified, the wizard falls back to local model defaults.
You can also sign in directly at ollama.com/signin.
## Advanced

### Reasoning models

OpenClaw treats models with names such as `deepseek-r1`, `reasoning`, or `think` as reasoning-capable by default:
```shell
ollama pull deepseek-r1:32b
```
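The name heuristic can be sketched like this (illustrative; OpenClaw’s exact matching logic may differ):

```python
REASONING_MARKERS = ("r1", "reasoning", "think")

def looks_like_reasoning_model(model_id: str) -> bool:
    """Flag a model as reasoning-capable if its name contains a marker."""
    name = model_id.lower()
    return any(marker in name for marker in REASONING_MARKERS)

print(looks_like_reasoning_model("deepseek-r1:32b"))  # -> True
print(looks_like_reasoning_model("gpt-oss:20b"))      # -> False
```

You can always override the flag by setting `reasoning` explicitly in an explicit model definition.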
### Model Costs

Ollama is free and runs locally, so all model costs are set to $0.
### Streaming Configuration

OpenClaw’s Ollama integration uses the native Ollama API (`/api/chat`) by default, which fully supports streaming and tool calling simultaneously. No special configuration is needed.
### Legacy OpenAI-Compatible Mode

If you need to use the OpenAI-compatible endpoint instead (e.g., behind a proxy that only supports the OpenAI format), set `api: "openai-completions"` explicitly:

```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        injectNumCtxForOpenAICompat: true, // default: true
        apiKey: "ollama-local",
        models: [...],
      },
    },
  },
}
```

This mode may not support streaming and tool calling simultaneously. You may need to disable streaming with `params: { streaming: false }` in the model config.
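For example, a model entry with streaming disabled might look like this (hypothetical model id):

```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        apiKey: "ollama-local",
        models: [
          {
            id: "gpt-oss:20b",
            name: "GPT-OSS 20B",
            params: { streaming: false }, // avoid streaming + tool-calling issues
          },
        ],
      },
    },
  },
}
```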
When `api: "openai-completions"` is used with Ollama, OpenClaw injects `options.num_ctx` by default so Ollama does not silently fall back to a 4096-token context window. If your proxy/upstream rejects unknown `options` fields, disable this behavior:

```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        injectNumCtxForOpenAICompat: false,
        apiKey: "ollama-local",
        models: [...],
      },
    },
  },
}
```

### Context windows
For auto-discovered models, OpenClaw uses the context window reported by Ollama when available; otherwise it falls back to the default Ollama context window used by OpenClaw. You can override `contextWindow` and `maxTokens` in explicit provider config.
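For example, an explicit entry that pins both values might look like this (illustrative numbers):

```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://127.0.0.1:11434",
        apiKey: "ollama-local",
        api: "ollama",
        models: [
          {
            id: "llama3.3",
            name: "Llama 3.3",
            contextWindow: 32768, // override what discovery would report
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```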
## Troubleshooting

### Ollama not detected

Make sure Ollama is running, that you set `OLLAMA_API_KEY` (or an auth profile), and that you did not define an explicit `models.providers.ollama` entry:

```shell
ollama serve
```

And that the API is accessible:

```shell
curl http://localhost:11434/api/tags
```

### No models available
If your model is not listed, either:
- Pull the model locally, or
- Define the model explicitly in `models.providers.ollama`.
To add models:
```shell
ollama list                # See what's installed
ollama pull glm-4.7-flash
ollama pull gpt-oss:20b
ollama pull llama3.3       # Or another model
```

### Connection refused

Check that Ollama is running on the correct port:

```shell
# Check if Ollama is running
ps aux | grep ollama

# Or restart Ollama
ollama serve
```

## See Also
- Model Providers - Overview of all providers
- Model Selection - How to choose models
- Configuration - Full config reference