Generate videos from a text prompt or input image using AI models from various providers.
**OpenAI compatibility**
This endpoint implements the OpenAI Videos API. You can use any OpenAI SDK by changing the base URL to `https://api.lumenfall.ai/openai/v1`. Lumenfall normalizes behavior across all models - mapping parameters, emulating features, and standardizing errors - so your code works consistently regardless of which provider handles the request.
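As a sketch of what a request against the compatible base URL looks like, the snippet below builds (but does not send) a video-creation call with the standard library. The `/videos` creation path, the model name, and the payload fields are illustrative assumptions, not part of this page:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.lumenfall.ai/openai/v1"

# Hypothetical payload; check your model's documentation for real fields.
payload = {
    "model": "example-video-model",
    "prompt": "A drone shot over a coastline at sunset",
}

req = urllib.request.Request(
    url=f"{BASE_URL}/videos",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('LUMENFALL_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit it; a successful submission
# returns 202 with a video object in "queued" status (see below).
```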
**Async workflow**
Video generation is asynchronous. A successful request returns a `202` response with a video object in `queued` status. Poll `GET /v1/videos/{id}` until the status is `completed` or `failed`. You can also use webhooks to receive a notification when the video is ready.
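The polling loop above can be sketched as a small helper. The `fetch` callable stands in for a wrapper around `GET /v1/videos/{id}`; injecting it keeps the sketch testable without an API key:

```python
import time

def wait_for_video(video_id, fetch, poll_interval=5.0, timeout=600.0):
    """Poll until the video reaches a terminal status.

    `fetch` maps a video id to the current video object, e.g. by
    calling GET /v1/videos/{id} with your HTTP client of choice.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        video = fetch(video_id)
        if video["status"] in ("completed", "failed"):
            return video
        time.sleep(poll_interval)
    raise TimeoutError(f"video {video_id} still pending after {timeout}s")

# Demo with a stub that completes on the third poll:
statuses = iter(["queued", "queued", "completed"])
result = wait_for_video("vid_123", lambda _id: {"status": next(statuses)},
                        poll_interval=0)
```

In production you would also back off between polls or switch to webhooks for long-running jobs.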
**Content types**
This endpoint accepts both `application/json` and `multipart/form-data` requests. Use multipart when you want to upload image files directly instead of passing URLs.
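To make the multipart option concrete, here is a minimal hand-rolled `multipart/form-data` encoder; real clients would normally use an HTTP library's file-upload support instead, and the field values shown are illustrative:

```python
import io
import uuid

def encode_multipart(fields, files):
    """Encode text fields and (filename, bytes, content_type) files
    into a multipart/form-data body. Returns (body, content_type)."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"\r\n\r\n{value}\r\n'.encode()
        )
    for name, (filename, data, ctype) in files.items():
        buf.write(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"; filename="{filename}"\r\n'
            f'Content-Type: {ctype}\r\n\r\n'.encode()
        )
        buf.write(data + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

body, content_type = encode_multipart(
    {"prompt": "A cat surfing"},
    {"input_reference": ("photo.jpg", b"\xff\xd8\xff", "image/jpeg")},
)
```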
Reference image(s) for image-to-video generation. Not all models support this - check the model's capabilities. Accepts a single object or an array of objects:
```json
// Single reference
{"image_url": "https://example.com/photo.jpg"}

// Multiple references
[{"image_url": "https://..."}, {"image_url": "https://..."}]
```
`image_url` can be an HTTPS URL or a base64 data URI. The number of references accepted depends on the model (most models support at most 1). When using `multipart/form-data`, send file uploads or URL strings as `input_reference` fields instead. Multiple files are supported via `input_reference`, `input_reference[]`, or `input_reference[N]` field names.
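Building the base64 data URI form of `image_url` is a one-liner; the sample bytes and MIME type below are illustrative:

```python
import base64

def image_to_data_uri(data: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URI usable as image_url."""
    return f"data:{mime};base64,{base64.b64encode(data).decode()}"

# Example reference object with fake PNG bytes:
ref = {"image_url": image_to_data_uri(b"\x89PNG\r\n", mime="image/png")}
```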
URL to receive a webhook notification when the video completes or fails. Deliveries are signed with your organization’s webhook secret - retrieve it via Get webhook secret. See Webhooks for payload format and verification.
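Verification typically means recomputing the signature over the raw request body and comparing in constant time. The Webhooks page defines the actual header name and signing scheme; HMAC-SHA256 over the body, as many webhook systems use, is assumed here purely for illustration:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature_hex: str, secret: str) -> bool:
    """Constant-time check of an assumed HMAC-SHA256 webhook signature.

    Consult the provider's Webhooks documentation for the real header
    name, encoding, and any timestamp component in the signed data.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Demo: sign a sample delivery body with a sample secret, then verify it.
delivery_body = b'{"id": "vid_123", "status": "completed"}'
sig = hmac.new(b"whsec_example", delivery_body, hashlib.sha256).hexdigest()
```

Always compare with `hmac.compare_digest` rather than `==` to avoid timing side channels.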
A unique key (up to 256 characters) to prevent duplicate requests. If you send the same key twice, the second request returns the existing video instead of creating a new one.
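The documented dedup behavior means a client can safely retry a create call (for example after a network timeout) by reusing the same key. The stub API below mimics that server-side behavior to show the client pattern:

```python
import uuid

class FakeVideoAPI:
    """Stub mimicking the documented dedup rule: the same idempotency
    key always returns the same video object instead of a new one."""

    def __init__(self):
        self._by_key = {}

    def create_video(self, prompt, idempotency_key):
        if idempotency_key not in self._by_key:
            self._by_key[idempotency_key] = {
                "id": f"vid_{len(self._by_key)}",
                "status": "queued",
                "prompt": prompt,
            }
        return self._by_key[idempotency_key]

api = FakeVideoAPI()
key = uuid.uuid4().hex  # well under the 256-character limit
first = api.create_video("A timelapse of clouds", key)
retry = api.create_video("A timelapse of clouds", key)  # retry after a timeout
```

Generate the key once per logical request and reuse it across retries; generating a fresh key per attempt defeats the deduplication.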