onboard
openclaw onboard
Interactive onboarding for local or remote Gateway setup.
Related guides
- CLI onboarding hub: Onboarding (CLI)
- Onboarding overview: Onboarding Overview
- CLI onboarding reference: CLI Setup Reference
- CLI automation: CLI Automation
- macOS onboarding: Onboarding (macOS App)
Examples
```shell
openclaw onboard
openclaw onboard --flow quickstart
openclaw onboard --flow manual
openclaw onboard --mode remote --remote-url wss://gateway-host:18789
```

For plaintext private-network `ws://` targets (trusted networks only), set
`OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1` in the onboarding process environment.
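As a sketch, a plaintext `ws://` onboarding run on a trusted private network might look like this (the host and port below are placeholders, not defaults):

```shell
# Trusted private network only: allow a plaintext ws:// gateway target.
OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1 openclaw onboard \
  --mode remote \
  --remote-url ws://192.168.1.50:18789
```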
Non-interactive custom provider:
```shell
openclaw onboard --non-interactive \
  --auth-choice custom-api-key \
  --custom-base-url "https://llm.example.com/v1" \
  --custom-model-id "foo-large" \
  --custom-api-key "$CUSTOM_API_KEY" \
  --secret-input-mode plaintext \
  --custom-compatibility openai
```

`--custom-api-key` is optional in non-interactive mode. If omitted, onboarding checks `CUSTOM_API_KEY`.
Non-interactive Ollama:
```shell
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://ollama-host:11434" \
  --custom-model-id "qwen3.5:27b" \
  --accept-risk
```

`--custom-base-url` defaults to `http://127.0.0.1:11434`. `--custom-model-id` is optional; if omitted, onboarding uses Ollama’s suggested defaults. Cloud model IDs such as `kimi-k2.5:cloud` also work here.
Store provider keys as refs instead of plaintext:
```shell
openclaw onboard --non-interactive \
  --auth-choice openai-api-key \
  --secret-input-mode ref \
  --accept-risk
```

With `--secret-input-mode ref`, onboarding writes env-backed refs instead of plaintext key values.
For auth-profile backed providers this writes `keyRef` entries; for custom providers this writes `models.providers.<id>.apiKey` as an env ref (for example `{ source: "env", provider: "default", id: "CUSTOM_API_KEY" }`).
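Under these assumptions, the resulting config fragment for a custom provider would look roughly like the sketch below (the provider id `custom` is a placeholder; the exact serialization may differ):

```json
{
  "models": {
    "providers": {
      "custom": {
        "apiKey": { "source": "env", "provider": "default", "id": "CUSTOM_API_KEY" }
      }
    }
  }
}
```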
Non-interactive ref mode contract:
- Set the provider env var in the onboarding process environment (for example `OPENAI_API_KEY`).
- Do not pass inline key flags (for example `--openai-api-key`) unless that env var is also set.
- If an inline key flag is passed without the required env var, onboarding fails fast with guidance.
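In automation scripts you can mirror that fail-fast check yourself before invoking onboarding. A minimal POSIX-shell sketch (the helper name `require_env` is ours, not part of openclaw):

```shell
# require_env NAME: succeed only if NAME is set and non-empty,
# mirroring onboarding's precheck for --secret-input-mode ref.
require_env() {
  eval "_val=\${$1:-}"
  if [ -z "$_val" ]; then
    echo "error: $1 must be set for --secret-input-mode ref" >&2
    return 1
  fi
}
```

Call `require_env OPENAI_API_KEY` before running `openclaw onboard … --secret-input-mode ref` so the script stops with a clear message instead of failing mid-run.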
Gateway token options in non-interactive mode:
- `--gateway-auth token --gateway-token <token>` stores a plaintext token.
- `--gateway-auth token --gateway-token-ref-env <name>` stores `gateway.auth.token` as an env SecretRef.
- `--gateway-token` and `--gateway-token-ref-env` are mutually exclusive.
- `--gateway-token-ref-env` requires a non-empty env var in the onboarding process environment.
- With `--install-daemon`, when token auth requires a token, SecretRef-managed gateway tokens are validated but not persisted as resolved plaintext in supervisor service environment metadata.
- With `--install-daemon`, if token mode requires a token and the configured token SecretRef is unresolved, onboarding fails closed with remediation guidance.
- With `--install-daemon`, if both `gateway.auth.token` and `gateway.auth.password` are configured and `gateway.auth.mode` is unset, onboarding blocks the install until the mode is set explicitly.
Example:
```shell
export OPENCLAW_GATEWAY_TOKEN="your-token"
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice skip \
  --gateway-auth token \
  --gateway-token-ref-env OPENCLAW_GATEWAY_TOKEN \
  --accept-risk
```

Non-interactive local gateway health:
- Unless you pass `--skip-health`, onboarding waits for a reachable local gateway before it exits successfully.
- `--install-daemon` starts the managed gateway install path first. Without it, you must already have a local gateway running, for example via `openclaw gateway run`.
- If you only want config/workspace/bootstrap writes in automation, use `--skip-health`.
- On native Windows, `--install-daemon` tries Scheduled Tasks first and falls back to a per-user Startup-folder login item if task creation is denied.
Interactive onboarding behavior with reference mode:
- Choose Use secret reference when prompted.
- Then choose either:
- Environment variable
- Configured secret provider (`file` or `exec`)
- Onboarding performs a fast preflight validation before saving the ref.
- If validation fails, onboarding shows the error and lets you retry.
Non-interactive Z.AI endpoint choices:
Note: --auth-choice zai-api-key now auto-detects the best Z.AI endpoint for your key (prefers the general API with zai/glm-5).
If you specifically want the GLM Coding Plan endpoints, pick zai-coding-global or zai-coding-cn.
```shell
# Promptless endpoint selection
openclaw onboard --non-interactive \
  --auth-choice zai-coding-global \
  --zai-api-key "$ZAI_API_KEY"

# Other Z.AI endpoint choices:
# --auth-choice zai-coding-cn
# --auth-choice zai-global
# --auth-choice zai-cn
```

Non-interactive Mistral example:
```shell
openclaw onboard --non-interactive \
  --auth-choice mistral-api-key \
  --mistral-api-key "$MISTRAL_API_KEY"
```

Flow notes:
- `quickstart`: minimal prompts, auto-generates a gateway token.
- `manual`: full prompts for port/bind/auth (alias of `advanced`).
- In the web-search step, choosing Grok can trigger a separate follow-up prompt to enable `x_search` with the same `XAI_API_KEY` and optionally pick an `x_search` model. Other web-search providers do not show that prompt.
- Local onboarding DM scope behavior: CLI Setup Reference.
- Fastest first chat: `openclaw dashboard` (Control UI, no channel setup).
- Custom Provider: connect any OpenAI- or Anthropic-compatible endpoint, including hosted providers not listed. Use Unknown to auto-detect.
Common follow-up commands
```shell
openclaw configure
openclaw agents add <name>
```