Ollama

Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates with Ollama’s native API (/api/chat), supports streaming and tool calling, and can auto-discover local Ollama models when you opt in with OLLAMA_API_KEY (or an auth profile) and do not define an explicit models.providers.ollama entry.
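Under the hood this is the same endpoint you can exercise yourself with curl. A minimal sketch of a non-streaming request against the native chat API (the model name is only an example and must already be pulled):

Terminal window
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.3",
  "messages": [{ "role": "user", "content": "Say hello in one sentence." }],
  "stream": false
}'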

The fastest way to set up Ollama is through onboarding:

Terminal window
openclaw onboard

Select Ollama from the provider list. Onboarding will:

  1. Ask for the Ollama base URL where your instance can be reached (default http://127.0.0.1:11434).
  2. Let you choose Cloud + Local (cloud models and local models) or Local (local models only).
  3. Open a browser sign-in flow if you choose Cloud + Local and are not signed in to ollama.com.
  4. Discover available models and suggest defaults.
  5. Auto-pull the selected model if it is not available locally.

Non-interactive mode is also supported:

Terminal window
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --accept-risk

Optionally specify a custom base URL or model:

Terminal window
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://ollama-host:11434" \
  --custom-model-id "qwen3.5:27b" \
  --accept-risk
  1. Install Ollama: https://ollama.com/download

  2. Pull a local model if you want local inference:

Terminal window
ollama pull glm-4.7-flash
# or
ollama pull gpt-oss:20b
# or
ollama pull llama3.3
  3. If you want cloud models too, sign in:
Terminal window
ollama signin
  4. Run onboarding and choose Ollama:
Terminal window
openclaw onboard
  • Local: local models only
  • Cloud + Local: local models plus cloud models
  • Cloud models such as kimi-k2.5:cloud, minimax-m2.5:cloud, and glm-5:cloud do not require a local ollama pull

OpenClaw currently suggests:

  • local default: glm-4.7-flash
  • cloud defaults: kimi-k2.5:cloud, minimax-m2.5:cloud, glm-5:cloud
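If you chose Cloud + Local, any of these cloud defaults can be selected just like a local model once onboarding completes (sketch; assumes you are signed in to ollama.com):

Terminal window
openclaw models set ollama/glm-5:cloud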
  5. If you prefer manual setup, enable Ollama for OpenClaw directly (any value works; Ollama doesn’t require a real key):
Terminal window
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
openclaw config set models.providers.ollama.apiKey "ollama-local"
  6. Inspect or switch models:
Terminal window
openclaw models list
openclaw models set ollama/glm-4.7-flash
  7. Or set the default in config:
{
  agents: {
    defaults: {
      model: { primary: "ollama/glm-4.7-flash" },
    },
  },
}

When you set OLLAMA_API_KEY (or an auth profile) and do not define models.providers.ollama, OpenClaw discovers models from the local Ollama instance at http://127.0.0.1:11434:

  • Queries /api/tags
  • Uses best-effort /api/show lookups to read contextWindow when available
  • Marks reasoning with a model-name heuristic (r1, reasoning, think)
  • Sets maxTokens to the default Ollama max-token cap used by OpenClaw
  • Sets all costs to 0

This avoids manual model entries while keeping the catalog aligned with the local Ollama instance.
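You can inspect the same data that discovery reads (sketch; the model name is only an example):

Terminal window
# List installed models (what discovery enumerates)
curl http://127.0.0.1:11434/api/tags
# Show per-model details, including the context length when the model reports one
curl http://127.0.0.1:11434/api/show -d '{"model": "glm-4.7-flash"}'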

To see what models are available:

Terminal window
ollama list
openclaw models list

To add a new model, simply pull it with Ollama:

Terminal window
ollama pull mistral

The new model will be automatically discovered and available to use.

If you set models.providers.ollama explicitly, auto-discovery is skipped and you must define models manually (see below).

The simplest way to enable Ollama is via environment variable:

Terminal window
export OLLAMA_API_KEY="ollama-local"

Use explicit config when:

  • Ollama runs on another host/port.
  • You want to force specific context windows or model lists.
  • You want fully manual model definitions.
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434",
        apiKey: "ollama-local",
        api: "ollama",
        models: [
          {
            id: "gpt-oss:20b",
            name: "GPT-OSS 20B",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192 * 10
          }
        ]
      }
    }
  }
}

If OLLAMA_API_KEY is set, you can omit apiKey in the provider entry and OpenClaw will fill it for availability checks.
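For example, with OLLAMA_API_KEY exported, a provider entry can omit apiKey entirely (sketch; the baseUrl and model list are placeholders):

{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434",
        api: "ollama",
        // apiKey omitted; OpenClaw fills it from OLLAMA_API_KEY for availability checks
        models: [...]
      }
    }
  }
}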

If Ollama is running on a different host or port (explicit config disables auto-discovery, so define models manually):

{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434", // No /v1 - use native Ollama API URL
        api: "ollama", // Set explicitly to guarantee native tool-calling behavior
      },
    },
  },
}

Once configured, all your Ollama models are available:

{
  agents: {
    defaults: {
      model: {
        primary: "ollama/gpt-oss:20b",
        fallbacks: ["ollama/llama3.3", "ollama/qwen2.5-coder:32b"],
      },
    },
  },
}

Cloud models let you run cloud-hosted models (for example kimi-k2.5:cloud, minimax-m2.5:cloud, glm-5:cloud) alongside your local models.

To use cloud models, select Cloud + Local mode during setup. The wizard checks whether you are signed in and opens a browser sign-in flow when needed. If authentication cannot be verified, the wizard falls back to local model defaults.

You can also sign in directly at ollama.com/signin.

OpenClaw treats models with names such as deepseek-r1, reasoning, or think as reasoning-capable by default:

Terminal window
ollama pull deepseek-r1:32b
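If a model’s name does not match the heuristic, you can still mark it as reasoning-capable in an explicit provider entry (sketch; the model ID and limits are illustrative, and explicit config disables auto-discovery):

{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        api: "ollama",
        models: [
          {
            id: "qwen3.5:27b",
            name: "Qwen 3.5 27B",
            reasoning: true, // force reasoning support when the name heuristic misses it
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}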

Ollama is free and runs locally, so all model costs are set to $0.

OpenClaw’s Ollama integration uses the native Ollama API (/api/chat) by default, which fully supports streaming and tool calling simultaneously. No special configuration is needed.

If you need to use the OpenAI-compatible endpoint instead (e.g., behind a proxy that only supports OpenAI format), set api: "openai-completions" explicitly:

{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        injectNumCtxForOpenAICompat: true, // default: true
        apiKey: "ollama-local",
        models: [...]
      }
    }
  }
}

This mode may not support streaming + tool calling simultaneously. You may need to disable streaming with params: { streaming: false } in model config.
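A sketch of that override (the other model entry values are illustrative; params is the point being shown):

{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        apiKey: "ollama-local",
        models: [
          {
            id: "gpt-oss:20b",
            name: "GPT-OSS 20B",
            params: { streaming: false }, // avoid streaming + tool-calling issues in this mode
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}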

When api: "openai-completions" is used with Ollama, OpenClaw injects options.num_ctx by default so Ollama does not silently fall back to a 4096 context window. If your proxy/upstream rejects unknown options fields, disable this behavior:

{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        injectNumCtxForOpenAICompat: false,
        apiKey: "ollama-local",
        models: [...]
      }
    }
  }
}

For auto-discovered models, OpenClaw uses the context window reported by Ollama when available, otherwise it falls back to the default Ollama context window used by OpenClaw. You can override contextWindow and maxTokens in explicit provider config.

Make sure Ollama is running, that OLLAMA_API_KEY (or an auth profile) is set, and that you have not defined an explicit models.providers.ollama entry:

Terminal window
ollama serve

Then confirm that the API is accessible:

Terminal window
curl http://localhost:11434/api/tags

If your model is not listed, either:

  • Pull the model locally, or
  • Define the model explicitly in models.providers.ollama.

To add models:

Terminal window
ollama list # See what's installed
ollama pull glm-4.7-flash
ollama pull gpt-oss:20b
ollama pull llama3.3 # Or another model

Check that Ollama is running on the correct port:

Terminal window
# Check if Ollama is running
ps aux | grep ollama
# Or restart Ollama
ollama serve
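If Ollama listens on a non-default host or port, probe that address directly and make sure the same base URL appears in your OpenClaw config (sketch; the host and port are placeholders):

Terminal window
# Probe the instance OpenClaw should be talking to
curl http://ollama-host:11434/api/tags
# Ollama reads OLLAMA_HOST for its bind address, e.g.:
OLLAMA_HOST=0.0.0.0:11434 ollama serve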