# Groq
Groq provides ultra-fast inference on open-source models (Llama, Gemma, Mistral, and more) using custom LPU hardware. OpenClaw connects to Groq through its OpenAI-compatible API.
- Provider: `groq`
- Auth: `GROQ_API_KEY`
- API: OpenAI-compatible
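Because the API is OpenAI-compatible, you can sanity-check a key with a plain HTTP request before wiring it into OpenClaw. A minimal curl sketch (the base URL is Groq's documented OpenAI-compatible endpoint; the model name is one example from the catalog):

```sh
# Chat completion against Groq's OpenAI-compatible endpoint
curl https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```

A JSON response with a `choices` array confirms the key and model are valid.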
## Quick start

1. Get an API key from console.groq.com/keys.

2. Set the API key:

   ```sh
   export GROQ_API_KEY="gsk_..."
   ```

3. Set a default model:

   ```json5
   {
     agents: {
       defaults: {
         model: { primary: "groq/llama-3.3-70b-versatile" },
       },
     },
   }
   ```

## Config file example

```json5
{
  env: { GROQ_API_KEY: "gsk_..." },
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}
```

## Audio transcription

Groq also provides fast Whisper-based audio transcription. When configured as a
media-understanding provider, OpenClaw uses Groq's `whisper-large-v3-turbo`
model to transcribe voice messages.
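Under the hood this maps to Groq's OpenAI-compatible transcription route. To try it manually outside OpenClaw, a hedged curl sketch (the audio file name is a placeholder):

```sh
# Transcribe an audio file with Groq's Whisper endpoint
curl https://api.groq.com/openai/v1/audio/transcriptions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -F model=whisper-large-v3-turbo \
  -F file=@voice-message.ogg
```

OpenClaw issues this kind of request for you; all that is needed is the media config below.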
```json5
{
  media: {
    understanding: {
      audio: {
        models: [{ provider: "groq" }],
      },
    },
  },
}
```

## Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure `GROQ_API_KEY` is
available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
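One way to do that, assuming the Gateway reads `~/.openclaw/.env` on startup (the key value is a placeholder):

```sh
# Ensure the config dir exists, then append the key for the daemonized Gateway
mkdir -p ~/.openclaw
echo 'GROQ_API_KEY=gsk_...' >> ~/.openclaw/.env
```

After updating the file, restart the Gateway service so the daemon picks up the new environment.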
## Available models

Groq's model catalog changes frequently. Run `openclaw models list | grep groq`
to see currently available models, or check
console.groq.com/docs/models.
Popular choices include:
- Llama 3.3 70B Versatile - general-purpose, large context
- Llama 3.1 8B Instant - fast, lightweight
- Gemma 2 9B - compact, efficient
- Mixtral 8x7B - MoE architecture, strong reasoning