
# Generate videos

> Create videos from text or image prompts

Generate videos from a text prompt or input image using AI models from various providers.

<Info>
  **OpenAI compatibility**

  This endpoint implements the [OpenAI Videos API](https://platform.openai.com/docs/api-reference/videos/create). You can use any [OpenAI SDK](/client-libraries/openai-sdk) by changing the base URL to `https://api.lumenfall.ai/openai/v1`.

  Lumenfall [normalizes behavior](/unified-model-behavior) across all models: it maps parameters, emulates features, and standardizes errors so your code works consistently regardless of which provider handles the request.
</Info>

<Info>
  **Async workflow**

  Video generation is asynchronous. A successful request returns a `202` response with a video object in `queued` status. Poll `GET /v1/videos/{id}` until the status is `completed` or `failed`. You can also use [webhooks](/webhooks) to receive a notification when the video is ready.
</Info>
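The poll-until-terminal step above can be sketched as a small helper. This is an illustrative sketch, not part of the API: `fetch_status` stands in for whatever performs the `GET /v1/videos/{id}` call (for example `client.videos.retrieve(video.id)` with an OpenAI SDK), and the interval and timeout values are arbitrary defaults.

```python
import time
from typing import Callable

TERMINAL_STATUSES = {"completed", "failed"}

def wait_for_video(fetch_status: Callable[[], str],
                   interval: float = 5.0,
                   timeout: float = 600.0) -> str:
    """Poll until the video reaches a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        if time.monotonic() + interval > deadline:
            raise TimeoutError("video did not finish before the timeout")
        time.sleep(interval)
```

In production, prefer [webhooks](/webhooks) over tight polling loops; a polling interval of a few seconds is usually sufficient given typical generation times.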

<Info>
  **Content types**

  This endpoint accepts both `application/json` and `multipart/form-data` requests. Use multipart when you want to upload image files directly instead of passing URLs.
</Info>

## Request body

<Note>
  You can include additional parameters not listed here. They will be passed through to the underlying provider.
</Note>

<Accordion title="What do the parameter badges mean?">
  Each parameter has a badge showing how Lumenfall handles it across different providers:

  | Badge                      | Meaning                                                                            |
  | -------------------------- | ---------------------------------------------------------------------------------- |
  | <Badge>Passthrough</Badge> | Passed as-is; some providers may ignore it                                         |
  | <Badge>Renamed</Badge>     | Field name is mapped to the provider's expected name                               |
  | <Badge>Converted</Badge>   | Value is transformed to match each provider's format                               |
  | <Badge>Emulated</Badge>    | Works consistently on all models, even if the provider doesn't natively support it |

  Learn more about [unified model behavior](/unified-model-behavior#parameter-support).
</Accordion>

<ParamField body="prompt" type="string" required>
  A text description of the desired video. Maximum length varies by model.

  <Badge>Renamed</Badge>
</ParamField>

<ParamField body="model" type="string" required>
  The model to use for video generation. See [Models](/models).
</ParamField>

<ParamField body="seconds" type="string or number">
  Duration of the video in seconds. Also accepted as `duration`.

  <Badge>Converted</Badge>
</ParamField>

<ParamField body="size" type="string">
  The dimensions of the generated video, as `WIDTHxHEIGHT` (e.g., `1920x1080`) or aspect ratio (e.g., `16:9`). Supported sizes vary by model.

  <Badge>Converted</Badge>
</ParamField>

<ParamField body="n" type="integer" default="1">
  The number of videos to generate. Must be between 1 and 4.

  <Badge>Emulated</Badge>
</ParamField>

<ParamField body="aspect_ratio" type="string">
  Aspect ratio for the video (e.g., `16:9`, `9:16`, `1:1`). A Lumenfall extension; the value is converted to the provider's native format.

  <Badge>Converted</Badge>
</ParamField>

<ParamField body="resolution" type="string">
  Video resolution shorthand: `720p`, `1080p`. Converted to appropriate dimensions per provider.

  <Badge>Converted</Badge>
</ParamField>

<ParamField body="input_reference" type="object or array">
  Reference image(s) for image-to-video generation. Not all models support this; check the model's capabilities.

  Accepts a single object or an array of objects:

  ```json theme={null}
  // Single reference
  {"image_url": "https://example.com/photo.jpg"}

  // Multiple references
  [{"image_url": "https://..."}, {"image_url": "https://..."}]
  ```

  `image_url` can be an HTTPS URL or a base64 data URI. The number of references accepted depends on the model (most models support at most 1).

  When using `multipart/form-data`, send file uploads or URL strings as `input_reference` fields instead. Multiple files are supported via `input_reference`, `input_reference[]`, or `input_reference[N]` field names.

  <Badge>Renamed</Badge>
</ParamField>

<ParamField body="negative_prompt" type="string">
  Text describing what to avoid in the video. Maximum 5000 characters.

  <Badge>Renamed</Badge>
</ParamField>

<ParamField body="media_retention" type="string">
  How long to retain generated media. See [Media retention](/media-retention).
</ParamField>

<ParamField body="webhook_url" type="string">
  URL to receive a webhook notification when the video completes or fails. Deliveries are signed with your organization's webhook secret; retrieve it via [Get webhook secret](/api-reference/webhooks/secret). See [Webhooks](/webhooks) for payload format and verification.
</ParamField>
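As an illustration of signed-delivery verification only: the sketch below assumes a hex-encoded HMAC-SHA256 over the raw request body, which is a common convention but an assumption here. Consult [Webhooks](/webhooks) for Lumenfall's actual signature header and encoding.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Check a received signature against an HMAC-SHA256 of the raw body.

    Assumes a hex-encoded HMAC-SHA256 scheme; see the Webhooks docs for
    the real header name and encoding.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```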

<ParamField body="idempotency_key" type="string">
  A unique key (up to 256 characters) to prevent duplicate requests. If you send the same key twice, the second request returns the existing video instead of creating a new one.
</ParamField>

<ParamField body="metadata" type="object">
  Key-value pairs of strings to attach to the video object.
</ParamField>

<ParamField body="user" type="string">
  A unique identifier representing your end-user. Only used by some providers.

  <Badge>Passthrough</Badge>
</ParamField>

## Query parameters

<ParamField query="dryRun" type="boolean" default="false">
  If `true`, returns a cost estimate without generating the video. See [Cost estimation](/api-reference/cost-estimation).
</ParamField>

## Response

Returns a `202 Accepted` response with the video object.

<ResponseField name="id" type="string">
  Unique identifier for the video.
</ResponseField>

<ResponseField name="object" type="string">
  Always `"video"`.
</ResponseField>

<ResponseField name="created_at" type="integer">
  Unix timestamp of when the video was created.
</ResponseField>

<ResponseField name="status" type="string">
  The generation status. One of `queued`, `in_progress`, `completed`, or `failed`.
</ResponseField>

<ResponseField name="model" type="string">
  The model used for generation.
</ResponseField>

<ResponseField name="seconds" type="string">
  The requested duration.
</ResponseField>

<ResponseField name="size" type="string">
  Output dimensions as `"WIDTHxHEIGHT"` (e.g., `"1920x1080"`) or aspect ratio (e.g., `"16:9"`).
</ResponseField>

<ResponseField name="metadata" type="object">
  Metadata about the request execution, including cost estimates. See [Billing](/billing#effective-cost-on-responses).

  <Expandable title="Metadata object">
    <ResponseField name="provider_name" type="string">
      Provider display name (e.g., `"Google Vertex AI"`).
    </ResponseField>

    <ResponseField name="provider" type="string">
      Provider slug (e.g., `"replicate"`).
    </ResponseField>

    <ResponseField name="upstream_id" type="string">
      The provider's job ID, useful for reconciliation.
    </ResponseField>

    <ResponseField name="model" type="string">
      The model string sent in the request.
    </ResponseField>

    <ResponseField name="executed_model" type="string">
      The model that was actually executed, as `"{provider_slug}/{provider_model}"`.
    </ResponseField>

    <ResponseField name="cost" type="number">
      Final effective cost. Only present when the job is `completed`.
    </ResponseField>

    <ResponseField name="cost_estimate" type="number">
      Estimated cost. Present while the job is `queued` or `in_progress`.
    </ResponseField>

    <ResponseField name="cost_currency" type="string">
      Currency of the cost (e.g., `"USD"`).
    </ResponseField>
  </Expandable>
</ResponseField>

<RequestExample>
  ```bash cURL (text-to-video) theme={null}
  curl -X POST https://api.lumenfall.ai/openai/v1/videos \
    -H "Authorization: Bearer $LUMENFALL_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "model": "sora-2",
      "prompt": "A capybara lounging in a hot spring, steam rising gently, slow camera pan",
      "seconds": 10,
      "size": "1920x1080"
    }'
  ```

  ```bash cURL (image-to-video, JSON) theme={null}
  curl -X POST https://api.lumenfall.ai/openai/v1/videos \
    -H "Authorization: Bearer $LUMENFALL_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "model": "sora-2",
      "prompt": "The capybara turns its head and blinks slowly",
      "seconds": 5,
      "input_reference": {
        "image_url": "https://example.com/capybara.jpg"
      }
    }'
  ```

  ```bash cURL (image-to-video, multipart) theme={null}
  curl -X POST https://api.lumenfall.ai/openai/v1/videos \
    -H "Authorization: Bearer $LUMENFALL_API_KEY" \
    -F model=sora-2 \
    -F prompt="The capybara turns its head and blinks slowly" \
    -F seconds=5 \
    -F input_reference=@capybara.jpg
  ```

  ```python Python theme={null}
  from openai import OpenAI

  client = OpenAI(
      api_key="your-lumenfall-api-key",
      base_url="https://api.lumenfall.ai/openai/v1"
  )

  # Text-to-video
  video = client.videos.create(
      model="sora-2",
      prompt="A capybara lounging in a hot spring, steam rising gently, slow camera pan",
      seconds=10,
      size="1920x1080",
  )

  print(video.id)  # Use this ID to poll for status
  ```

  ```typescript JavaScript / TypeScript theme={null}
  import OpenAI from "openai";

  const client = new OpenAI({
    apiKey: "your-lumenfall-api-key",
    baseURL: "https://api.lumenfall.ai/openai/v1",
  });

  // Text-to-video
  const video = await client.videos.create({
    model: "sora-2",
    prompt:
      "A capybara lounging in a hot spring, steam rising gently, slow camera pan",
    seconds: 10,
    size: "1920x1080",
  });

  console.log(video.id); // Use this ID to poll for status
  ```
</RequestExample>

<ResponseExample>
  ```json 202 Response theme={null}
  {
    "id": "video_abc123",
    "object": "video",
    "created_at": 1702345678,
    "status": "queued",
    "model": "sora-2",
    "seconds": "10",
    "size": "1920x1080",
    "metadata": {
      "model": "sora-2",
      "executed_model": "openai/sora-2",
      "provider": "openai",
      "provider_name": "OpenAI",
      "cost_estimate": 0.21,
      "cost_currency": "USD"
    }
  }
  ```
</ResponseExample>
