
Video Generation

OpenClaw agents can generate videos from text prompts, reference images, or existing videos. Twelve provider backends are supported, each with different model options, input modes, and feature sets. The agent picks the right provider automatically based on your configuration and available API keys.

OpenClaw treats video generation as three runtime modes:

  • generate for text-to-video requests with no reference media
  • imageToVideo when the request includes one or more reference images
  • videoToVideo when the request includes one or more reference videos

Providers can support any subset of those modes. The tool validates the active mode before submission and reports supported modes in action=list.

  1. Set an API key for any supported provider:

     ```sh
     export GEMINI_API_KEY="your-key"
     ```

  2. Optionally pin a default model:

     ```sh
     openclaw config set agents.defaults.videoGenerationModel.primary "google/veo-3.1-fast-generate-preview"
     ```

  3. Ask the agent:

Generate a 5-second cinematic video of a friendly lobster surfing at sunset.

The agent calls video_generate automatically. No tool allowlisting is needed.

Video generation is asynchronous. When the agent calls video_generate in a session:

  1. OpenClaw submits the request to the provider and immediately returns a task ID.
  2. The provider processes the job in the background (typically 30 seconds to 5 minutes depending on the provider and resolution).
  3. When the video is ready, OpenClaw wakes the same session with an internal completion event.
  4. The agent posts the finished video back into the original conversation.

While a job is in flight, duplicate video_generate calls in the same session return the current task status instead of starting another generation. Use openclaw tasks list or openclaw tasks show <taskId> to check progress from the CLI.

Outside of session-backed agent runs (for example, direct tool invocations), the tool falls back to inline generation and returns the final media path in the same turn.

Each video_generate request moves through four states:

  1. queued — task created, waiting for the provider to accept it.
  2. running — provider is processing (typically 30 seconds to 5 minutes depending on provider and resolution).
  3. succeeded — video ready; the agent wakes and posts it to the conversation.
  4. failed — provider error or timeout; the agent wakes with error details.
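The four states above form a small state machine; the type and transition table below are an illustrative sketch, not OpenClaw's actual implementation.

```typescript
// Illustrative sketch of the four-state video task lifecycle.
type TaskState = "queued" | "running" | "succeeded" | "failed";

// queued -> running -> succeeded | failed; a queued task can also fail
// early (provider rejection, timeout). Terminal states go nowhere.
const transitions: Record<TaskState, readonly TaskState[]> = {
  queued: ["running", "failed"],
  running: ["succeeded", "failed"],
  succeeded: [],
  failed: [],
};

function canTransition(from: TaskState, to: TaskState): boolean {
  return transitions[from].includes(to);
}
```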

Check status from the CLI:

```sh
openclaw tasks list
openclaw tasks show <taskId>
openclaw tasks cancel <taskId>
```

Duplicate prevention: if a video task is already queued or running for the current session, video_generate returns the existing task status instead of starting a new one. Use action: "status" to check explicitly without triggering a new generation.
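The duplicate-prevention rule can be sketched as a per-session registry that hands back the in-flight task instead of creating a second one; the names and shapes here are hypothetical.

```typescript
// Hypothetical sketch: one in-flight video task per session. A second
// submit while a task is queued/running returns the existing task.
type Task = { id: string; state: "queued" | "running" | "succeeded" | "failed" };

const inFlight = new Map<string, Task>(); // sessionId -> latest task

function submitVideoTask(sessionId: string, newId: string): Task {
  const existing = inFlight.get(sessionId);
  if (existing && (existing.state === "queued" || existing.state === "running")) {
    return existing; // report current status instead of starting another job
  }
  const task: Task = { id: newId, state: "queued" };
  inFlight.set(sessionId, task);
  return task;
}
```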

| Provider | Default model | Text | Image ref | Video ref | API key |
| --- | --- | --- | --- | --- | --- |
| Alibaba | wan2.6-t2v | Yes | Yes (remote URL) | Yes (remote URL) | MODELSTUDIO_API_KEY |
| BytePlus | seedance-1-0-lite-t2v-250428 | Yes | 1 image | No | BYTEPLUS_API_KEY |
| ComfyUI | workflow | Yes | 1 image | No | COMFY_API_KEY or COMFY_CLOUD_API_KEY |
| fal | fal-ai/minimax/video-01-live | Yes | 1 image | No | FAL_KEY |
| Google | veo-3.1-fast-generate-preview | Yes | 1 image | 1 video | GEMINI_API_KEY |
| MiniMax | MiniMax-Hailuo-2.3 | Yes | 1 image | No | MINIMAX_API_KEY |
| OpenAI | sora-2 | Yes | 1 image | 1 video | OPENAI_API_KEY |
| Qwen | wan2.6-t2v | Yes | Yes (remote URL) | Yes (remote URL) | QWEN_API_KEY |
| Runway | gen4.5 | Yes | 1 image | 1 video | RUNWAYML_API_SECRET |
| Together | Wan-AI/Wan2.2-T2V-A14B | Yes | 1 image | No | TOGETHER_API_KEY |
| Vydra | veo3 | Yes | 1 image (kling) | No | VYDRA_API_KEY |
| xAI | grok-imagine-video | Yes | 1 image | 1 video | XAI_API_KEY |

Some providers accept additional or alternate API key env vars. See individual provider pages for details.

Run video_generate with action=list to inspect the available providers, models, and runtime modes.

This is the explicit mode contract used by video_generate, contract tests, and the shared live sweep.

| Provider | generate | imageToVideo | videoToVideo | Shared live lanes today |
| --- | --- | --- | --- | --- |
| Alibaba | Yes | Yes | Yes | generate, imageToVideo; videoToVideo skipped because this provider needs remote http(s) video URLs |
| BytePlus | Yes | Yes | No | generate, imageToVideo |
| ComfyUI | Yes | Yes | No | Not in the shared sweep; workflow-specific coverage lives with Comfy tests |
| fal | Yes | Yes | No | generate, imageToVideo |
| Google | Yes | Yes | Yes | generate, imageToVideo; shared videoToVideo skipped because the current buffer-backed Gemini/Veo sweep does not accept that input |
| MiniMax | Yes | Yes | No | generate, imageToVideo |
| OpenAI | Yes | Yes | Yes | generate, imageToVideo; shared videoToVideo skipped because this org/input path currently needs provider-side inpaint/remix access |
| Qwen | Yes | Yes | Yes | generate, imageToVideo; videoToVideo skipped because this provider needs remote http(s) video URLs |
| Runway | Yes | Yes | Yes | generate, imageToVideo; videoToVideo runs only when the selected model is runway/gen4_aleph |
| Together | Yes | Yes | No | generate, imageToVideo |
| Vydra | Yes | Yes | No | generate; shared imageToVideo skipped because bundled veo3 is text-only and bundled kling requires a remote image URL |
| xAI | Yes | Yes | Yes | generate, imageToVideo; videoToVideo skipped because this provider currently needs a remote MP4 URL |
Core parameter:

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | string | Text description of the video to generate (required for action: "generate") |

Reference inputs:

| Parameter | Type | Description |
| --- | --- | --- |
| image | string | Single reference image (path or URL) |
| images | string[] | Multiple reference images (up to 5) |
| video | string | Single reference video (path or URL) |
| videos | string[] | Multiple reference videos (up to 4) |

Output settings:

| Parameter | Type | Description |
| --- | --- | --- |
| aspectRatio | string | 1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9 |
| resolution | string | 480P, 720P, 768P, or 1080P |
| durationSeconds | number | Target duration in seconds (rounded to nearest provider-supported value) |
| size | string | Size hint when the provider supports it |
| audio | boolean | Enable generated audio when supported |
| watermark | boolean | Toggle provider watermarking when supported |

Tool controls:

| Parameter | Type | Description |
| --- | --- | --- |
| action | string | "generate" (default), "status", or "list" |
| model | string | Provider/model override (e.g. runway/gen4.5) |
| filename | string | Output filename hint |

Not all providers support all parameters. OpenClaw already normalizes duration to the closest provider-supported value, and it also remaps translated geometry hints such as size-to-aspect-ratio when a fallback provider exposes a different control surface. Truly unsupported overrides are ignored on a best-effort basis and reported as warnings in the tool result. Hard capability limits (such as too many reference inputs) fail before submission.
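Duration normalization amounts to snapping the requested value to the closest supported one. A minimal sketch follows; the supported-values list is an assumption, not OpenClaw's real per-provider table.

```typescript
// Hypothetical sketch: snap a requested duration to the nearest value
// the target provider supports. Ties resolve to the earlier entry.
function normalizeDuration(requested: number, supported: number[]): number {
  return supported.reduce((best, candidate) =>
    Math.abs(candidate - requested) < Math.abs(best - requested)
      ? candidate
      : best
  );
}
```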

Tool results report the applied settings. When OpenClaw remaps duration or geometry during provider fallback, the returned durationSeconds, size, aspectRatio, and resolution values reflect what was submitted, and details.normalization captures the requested-to-applied translation.

Reference inputs also select the runtime mode:

  • No reference media: generate
  • Any image reference: imageToVideo
  • Any video reference: videoToVideo

Mixed image and video references are not a stable shared capability surface. Prefer one reference type per request.
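That selection rule can be sketched as follows. This is illustrative only; checking video references first so mixed inputs degrade predictably is an assumption, not documented behavior.

```typescript
// Illustrative sketch of runtime-mode selection from reference inputs.
type VideoMode = "generate" | "imageToVideo" | "videoToVideo";

function selectMode(images: string[], videos: string[]): VideoMode {
  if (videos.length > 0) return "videoToVideo"; // any video reference
  if (images.length > 0) return "imageToVideo"; // any image reference
  return "generate"; // no reference media
}
```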

  • generate (default) — create a video from the given prompt and optional reference inputs.
  • status — check the state of the in-flight video task for the current session without starting another generation.
  • list — show available providers, models, and their capabilities.

When generating a video, OpenClaw resolves the model in this order:

  1. model tool parameter — if the agent specifies one in the call.
  2. videoGenerationModel.primary — from config.
  3. videoGenerationModel.fallbacks — tried in order.
  4. Auto-detection — uses providers that have valid auth, starting with the current default provider, then remaining providers in alphabetical order.

If a provider fails, the next candidate is tried automatically. If all candidates fail, the error includes details from each attempt.

Set agents.defaults.mediaGenerationAutoProviderFallback: false if you want video generation to use only the explicit model, primary, and fallbacks entries.
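The resolution order can be sketched as building an ordered candidate list, with auto-detection appended only when the fallback flag is on; the function and field names here are hypothetical.

```typescript
// Hypothetical sketch of model-candidate resolution order.
function resolveCandidates(
  toolParam: string | undefined,
  config: { primary?: string; fallbacks?: string[] },
  autoDetected: string[],
  autoFallback = true
): string[] {
  const candidates = [
    toolParam,                              // 1. explicit tool parameter
    config.primary,                         // 2. videoGenerationModel.primary
    ...(config.fallbacks ?? []),            // 3. fallbacks, in order
    ...(autoFallback ? autoDetected : []),  // 4. auto-detected providers
  ];
  // De-duplicate while preserving order, dropping empty slots.
  return [...new Set(candidates.filter((m): m is string => Boolean(m)))];
}
```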

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "google/veo-3.1-fast-generate-preview",
        fallbacks: ["runway/gen4.5", "qwen/wan2.6-t2v"],
      },
    },
  },
}
```

HeyGen video-agent on fal can be pinned with:

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "fal/fal-ai/heygen/v2/video-agent",
      },
    },
  },
}
```

Seedance 2.0 on fal can be pinned with:

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "fal/bytedance/seedance-2.0/fast/text-to-video",
      },
    },
  },
}
```

| Provider | Notes |
| --- | --- |
| Alibaba | Uses DashScope/Model Studio async endpoint. Reference images and videos must be remote http(s) URLs. |
| BytePlus | Single image reference only. |
| ComfyUI | Workflow-driven local or cloud execution. Supports text-to-video and image-to-video through the configured graph. |
| fal | Uses queue-backed flow for long-running jobs. Single image reference only. Includes HeyGen video-agent and Seedance 2.0 text-to-video and image-to-video model refs. |
| Google | Uses Gemini/Veo. Supports one image or one video reference. |
| MiniMax | Single image reference only. |
| OpenAI | Only size override is forwarded. Other style overrides (aspectRatio, resolution, audio, watermark) are ignored with a warning. |
| Qwen | Same DashScope backend as Alibaba. Reference inputs must be remote http(s) URLs; local files are rejected upfront. |
| Runway | Supports local files via data URIs. Video-to-video requires runway/gen4_aleph. Text-only runs expose 16:9 and 9:16 aspect ratios. |
| Together | Single image reference only. |
| Vydra | Uses https://www.vydra.ai/api/v1 directly to avoid auth-dropping redirects. veo3 is bundled as text-to-video only; kling requires a remote image URL. |
| xAI | Supports text-to-video, image-to-video, and remote video edit/extend flows. |

The shared video-generation contract now lets providers declare mode-specific capabilities instead of only flat aggregate limits. New provider implementations should prefer explicit mode blocks:

```ts
capabilities: {
  generate: {
    maxVideos: 1,
    maxDurationSeconds: 10,
    supportsResolution: true,
  },
  imageToVideo: {
    enabled: true,
    maxVideos: 1,
    maxInputImages: 1,
    maxDurationSeconds: 5,
  },
  videoToVideo: {
    enabled: true,
    maxVideos: 1,
    maxInputVideos: 1,
    maxDurationSeconds: 5,
  },
}
```

Flat aggregate fields such as maxInputImages and maxInputVideos are not enough to advertise transform-mode support. Providers should declare generate, imageToVideo, and videoToVideo explicitly so live tests, contract tests, and the shared video_generate tool can validate mode support deterministically.
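A validator over those mode blocks might look like the sketch below; the interface mirrors the capabilities example above, and the default-enabled rule for declared blocks is an assumption.

```typescript
// Illustrative sketch of per-mode capability validation.
interface ModeCaps {
  enabled?: boolean;
  maxInputImages?: number;
  maxInputVideos?: number;
}
type Capabilities = Partial<
  Record<"generate" | "imageToVideo" | "videoToVideo", ModeCaps>
>;

function supportsMode(caps: Capabilities, mode: keyof Capabilities): boolean {
  const block = caps[mode];
  if (!block) return false;        // undeclared mode => unsupported
  return block.enabled !== false;  // declared blocks default to enabled
}
```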

Opt-in live coverage for the shared bundled providers:

```sh
OPENCLAW_LIVE_TEST=1 pnpm test:live -- extensions/video-generation-providers.live.test.ts
```

Repo wrapper:

```sh
pnpm test:live:media video
```

This live file loads missing provider env vars from ~/.profile, prefers live/env API keys ahead of stored auth profiles by default, and runs the declared modes it can exercise safely with local media:

  • generate for every provider in the sweep
  • imageToVideo when capabilities.imageToVideo.enabled
  • videoToVideo when capabilities.videoToVideo.enabled and the provider/model accepts buffer-backed local video input in the shared sweep

Today the shared videoToVideo live lane covers:

  • runway only when you select runway/gen4_aleph

Set the default video generation model in your OpenClaw config:

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "qwen/wan2.6-t2v",
        fallbacks: ["qwen/wan2.6-r2v-flash"],
      },
    },
  },
}
```

Or via the CLI:

```sh
openclaw config set agents.defaults.videoGenerationModel.primary "qwen/wan2.6-t2v"
```