OpenAI Chat Completions (HTTP)
OpenClaw’s Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
This endpoint is disabled by default. Enable it in config first.
- `POST /v1/chat/completions`
- Same port as the Gateway (WS + HTTP multiplex): `http://<gateway-host>:<port>/v1/chat/completions`
When the Gateway’s OpenAI-compatible HTTP surface is enabled, it also serves:
- `GET /v1/models`
- `GET /v1/models/{id}`
- `POST /v1/embeddings`
- `POST /v1/responses`
Under the hood, requests are executed as a normal Gateway agent run (same codepath as openclaw agent), so routing/permissions/config match your Gateway.
Authentication
Uses the Gateway auth configuration. Send a bearer token:
Authorization: Bearer <token>
Notes:
- When `gateway.auth.mode="token"`, use `gateway.auth.token` (or `OPENCLAW_GATEWAY_TOKEN`).
- When `gateway.auth.mode="password"`, use `gateway.auth.password` (or `OPENCLAW_GATEWAY_PASSWORD`).
- If `gateway.auth.rateLimit` is configured and too many auth failures occur, the endpoint returns `429` with `Retry-After`.
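A client should back off when it sees that `429`. A minimal sketch of the client-side decision, assuming a numeric `Retry-After` value in seconds (the helper name and default are illustrative, not part of the Gateway API):

```python
def retry_delay(status: int, headers: dict, default: float = 1.0):
    """Return seconds to wait before retrying, or None if no retry is needed.

    The Gateway signals auth-failure rate limiting with HTTP 429 and a
    Retry-After header.
    """
    if status != 429:
        return None
    value = headers.get("Retry-After")
    try:
        return max(float(value), 0.0)
    except (TypeError, ValueError):
        # Missing or non-numeric header: fall back to a conservative default.
        return default
```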
Security boundary (important)
Treat this endpoint as a full operator-access surface for the gateway instance.
- HTTP bearer auth here is not a narrow per-user scope model.
- A valid Gateway token/password for this endpoint should be treated like an owner/operator credential.
- Requests run through the same control-plane agent path as trusted operator actions.
- There is no separate non-owner/per-user tool boundary on this endpoint; once a caller passes Gateway auth here, OpenClaw treats that caller as a trusted operator for this gateway.
- For shared-secret auth modes (`token` and `password`), the endpoint restores the normal full operator defaults even if the caller sends a narrower `x-openclaw-scopes` header.
- Trusted identity-bearing HTTP modes (for example trusted proxy auth or `gateway.auth.mode="none"`) still honor the declared operator scopes on the request.
- If the target agent policy allows sensitive tools, this endpoint can use them.
- Keep this endpoint on loopback/tailnet/private ingress only; do not expose it directly to the public internet.
Auth matrix:
- `gateway.auth.mode="token"` or `"password"` + `Authorization: Bearer ...`
  - proves possession of the shared gateway operator secret
  - ignores narrower `x-openclaw-scopes`
  - restores the full default operator scope set
  - treats chat turns on this endpoint as owner-sender turns
- trusted identity-bearing HTTP modes (for example trusted proxy auth, or `gateway.auth.mode="none"` on private ingress)
  - authenticate some outer trusted identity or deployment boundary
  - honor the declared `x-openclaw-scopes` header
  - only get owner semantics when `operator.admin` is actually present in those declared scopes
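The matrix above reduces to one rule: shared-secret modes get the full operator scope set regardless of what the request declares, while identity-bearing modes take the declared scopes at face value. A sketch of that rule, assuming a scope-set model — only `operator.admin` is named by these docs, so `FULL_OPERATOR_SCOPES` here is illustrative:

```python
# Illustrative: the docs only name "operator.admin"; the real default
# operator scope set is defined by the Gateway, not by this sketch.
FULL_OPERATOR_SCOPES = frozenset({"operator.admin"})

def effective_scopes(auth_mode: str, declared: frozenset) -> frozenset:
    """Resolve the scopes a request actually runs with on this endpoint."""
    if auth_mode in ("token", "password"):
        # Shared-secret modes ignore narrower declared scopes and
        # restore the full default operator scope set.
        return FULL_OPERATOR_SCOPES
    # Identity-bearing modes honor the declared x-openclaw-scopes as-is.
    return declared

def is_owner(scopes: frozenset) -> bool:
    # Owner semantics require operator.admin to actually be present.
    return "operator.admin" in scopes
```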
See Security and Remote access.
Agent-first model contract
OpenClaw treats the OpenAI `model` field as an agent target, not a raw provider model id.
- `model: "openclaw"` routes to the configured default agent.
- `model: "openclaw/default"` also routes to the configured default agent.
- `model: "openclaw/<agentId>"` routes to a specific agent.
Optional request headers:
- `x-openclaw-model: <provider/model-or-bare-id>` overrides the backend model for the selected agent.
- `x-openclaw-agent-id: <agentId>` remains supported as a compatibility override.
- `x-openclaw-session-key: <sessionKey>` fully controls session routing.
- `x-openclaw-message-channel: <channel>` sets the synthetic ingress channel context for channel-aware prompts and policies.
Compatibility aliases still accepted:
- `model: "openclaw:<agentId>"`
- `model: "agent:<agentId>"`
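Taken together, the contract and its aliases can be sketched as a small resolver. This is illustrative client/test code, not the Gateway’s implementation; `DEFAULT_AGENT` and `resolve_agent` are made-up names:

```python
DEFAULT_AGENT = "default"  # placeholder for the configured default agent id

def resolve_agent(model: str) -> str:
    """Map an OpenAI `model` value onto an OpenClaw agent id (sketch)."""
    # Compatibility aliases: "openclaw:<agentId>" and "agent:<agentId>".
    for prefix in ("openclaw:", "agent:"):
        if model.startswith(prefix):
            return model[len(prefix):]
    if model == "openclaw":
        return DEFAULT_AGENT
    if model.startswith("openclaw/"):
        agent_id = model[len("openclaw/"):]
        # "openclaw/default" is the stable alias for the default agent.
        return DEFAULT_AGENT if agent_id == "default" else agent_id
    raise ValueError(f"not an OpenClaw agent target: {model!r}")
```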
Enabling the endpoint
Section titled “Enabling the endpoint”Set gateway.http.endpoints.chatCompletions.enabled to true:
```json5
{
  gateway: {
    http: {
      endpoints: {
        chatCompletions: { enabled: true },
      },
    },
  },
}
```
Disabling the endpoint
Set gateway.http.endpoints.chatCompletions.enabled to false:
```json5
{
  gateway: {
    http: {
      endpoints: {
        chatCompletions: { enabled: false },
      },
    },
  },
}
```
Session behavior
By default the endpoint is stateless per request (a new session key is generated each call).
If the request includes an OpenAI user string, the Gateway derives a stable session key from it, so repeated calls can share an agent session.
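The actual derivation is internal to the Gateway; the observable contract is only that the same `user` string maps to the same session. A hash-based sketch of that idea, with an assumed key format:

```python
import hashlib

def session_key_for_user(user: str) -> str:
    """Illustrative only: the Gateway's real derivation is unspecified here.

    The point is determinism: the same `user` string always yields the same
    key, so repeated calls can share an agent session.
    """
    digest = hashlib.sha256(user.encode("utf-8")).hexdigest()
    return f"openai-user-{digest[:16]}"
```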
Why this surface matters
This is the highest-leverage compatibility set for self-hosted frontends and tooling:
- Most Open WebUI, LobeChat, and LibreChat setups expect `/v1/models`.
- Many RAG systems expect `/v1/embeddings`.
- Existing OpenAI chat clients can usually start with `/v1/chat/completions`.
- More agent-native clients increasingly prefer `/v1/responses`.
Model list and agent routing
What does `/v1/models` return?
An OpenClaw agent-target list.
The returned ids are `openclaw`, `openclaw/default`, and `openclaw/<agentId>` entries. Use them directly as OpenAI `model` values.
Does `/v1/models` list agents or sub-agents?
It lists top-level agent targets, not backend provider models and not sub-agents.
Sub-agents remain internal execution topology. They do not appear as pseudo-models.
Why is `openclaw/default` included?
openclaw/default is the stable alias for the configured default agent.
That means clients can keep using one predictable id even if the real default agent id changes between environments.
How do I override the backend model?
Use `x-openclaw-model`.
Examples:
x-openclaw-model: openai/gpt-5.4
x-openclaw-model: gpt-5.4
If you omit it, the selected agent runs with its normal configured model choice.
How do embeddings fit this contract?
/v1/embeddings uses the same agent-target model ids.
Use `model: "openclaw/default"` or `model: "openclaw/<agentId>"`. When you need a specific embedding model, send it in `x-openclaw-model`.
Without that header, the request passes through to the selected agent’s normal embedding setup.
Streaming (SSE)
Set `stream: true` to receive Server-Sent Events (SSE):
- `Content-Type: text/event-stream`
- Each event line is `data: <json>`
- Stream ends with `data: [DONE]`
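A consumer only needs to handle those two line shapes. A minimal client-side parser sketch (the function name is illustrative; real clients should also handle network framing and partial lines):

```python
import json

def parse_sse_lines(lines):
    """Yield decoded chunk dicts from Chat Completions SSE lines (sketch).

    Each event line is `data: <json>`; the stream ends with `data: [DONE]`.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # terminal sentinel: stop consuming
        yield json.loads(payload)
```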
Open WebUI quick setup
For a basic Open WebUI connection:
- Base URL: `http://127.0.0.1:18789/v1`
- Docker on macOS base URL: `http://host.docker.internal:18789/v1`
- API key: your Gateway bearer token
- Model: `openclaw/default`
Expected behavior:
- `GET /v1/models` should list `openclaw/default`
- Open WebUI should use `openclaw/default` as the chat model id
- If you want a specific backend provider/model for that agent, set the agent’s normal default model or send `x-openclaw-model`
Quick smoke:
```shell
curl -sS http://127.0.0.1:18789/v1/models \
  -H 'Authorization: Bearer YOUR_TOKEN'
```
If that returns `openclaw/default`, most Open WebUI setups can connect with the same base URL and token.
Examples
Non-streaming:

```shell
curl -sS http://127.0.0.1:18789/v1/chat/completions \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "openclaw/default",
    "messages": [{"role":"user","content":"hi"}]
  }'
```

Streaming:

```shell
curl -N http://127.0.0.1:18789/v1/chat/completions \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-model: openai/gpt-5.4' \
  -d '{
    "model": "openclaw/research",
    "stream": true,
    "messages": [{"role":"user","content":"hi"}]
  }'
```

List models:

```shell
curl -sS http://127.0.0.1:18789/v1/models \
  -H 'Authorization: Bearer YOUR_TOKEN'
```

Fetch one model:

```shell
curl -sS http://127.0.0.1:18789/v1/models/openclaw%2Fdefault \
  -H 'Authorization: Bearer YOUR_TOKEN'
```

Create embeddings:

```shell
curl -sS http://127.0.0.1:18789/v1/embeddings \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-model: openai/text-embedding-3-small' \
  -d '{
    "model": "openclaw/default",
    "input": ["alpha", "beta"]
  }'
```

Notes:
- `/v1/models` returns OpenClaw agent targets, not raw provider catalogs.
- `openclaw/default` is always present so one stable id works across environments.
- Backend provider/model overrides belong in `x-openclaw-model`, not the OpenAI `model` field.
- `/v1/embeddings` supports `input` as a string or array of strings.
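The string-or-array `input` shape is a common source of client bugs. A small normalization sketch a caller might use before batching embedding requests (the helper name is illustrative, not Gateway API):

```python
def normalize_embedding_input(value):
    """Normalize the OpenAI `input` field into a list of strings (sketch).

    The endpoint accepts either a single string or an array of strings;
    treating both uniformly simplifies client-side batching.
    """
    if isinstance(value, str):
        return [value]
    if isinstance(value, list) and all(isinstance(v, str) for v in value):
        return list(value)
    raise TypeError("input must be a string or a list of strings")
```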