# Edit images

> Edit images using text prompts

Edit or extend images using AI models from various providers. This endpoint accepts `multipart/form-data` requests for file uploads.

<Info>
  **OpenAI Compatibility**

  This endpoint implements the [OpenAI Images Edit API](https://platform.openai.com/docs/api-reference/images/createEdit). You can use any [OpenAI SDK](/client-libraries/openai-sdk) by changing the base URL to `https://api.lumenfall.ai/openai/v1`.

  Lumenfall [normalizes behavior](/unified-model-behavior) across all models: it maps parameters, emulates features, and standardizes errors, so your code works consistently regardless of which provider handles the request.
</Info>

## Request body

<Note>
  You can include additional parameters not listed here. They will be passed through to the underlying provider.
</Note>

<Accordion title="What do the parameter badges mean?">
  Each parameter has a badge showing how Lumenfall handles it across different providers:

  | Badge                      | Meaning                                                                            |
  | -------------------------- | ---------------------------------------------------------------------------------- |
  | <Badge>Passthrough</Badge> | Passed as-is; some providers may ignore it                                         |
  | <Badge>Renamed</Badge>     | Field name is mapped to the provider's expected name                               |
  | <Badge>Converted</Badge>   | Value is transformed to match each provider's format                               |
  | <Badge>Emulated</Badge>    | Works consistently on all models, even if the provider doesn't natively support it |

  Learn more about [unified model behavior](/unified-model-behavior#parameter-support).
</Accordion>

<ParamField body="image" type="file or array" required>
  The image(s) to edit. Must be a supported image file (PNG, WebP, or JPG) or an array of images. Maximum file size varies by model.

  <Badge>Renamed</Badge>
</ParamField>

<ParamField body="prompt" type="string" required>
  A text description of the desired edit. Maximum length varies by model.

  <Badge>Renamed</Badge>
</ParamField>

<ParamField body="model" type="string" required>
  The model to use for image editing. See [Models](/models).
</ParamField>

<ParamField body="mask" type="file">
  An image whose fully transparent areas (where alpha is zero) indicate where the image should be edited. Must be a valid PNG file with the same dimensions as the source image.

  <Badge>Passthrough</Badge>
</ParamField>
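  The mask must be an RGBA PNG whose fully transparent pixels mark the editable region, with the same dimensions as the source image. As an illustrative sketch, a rectangular mask can be built with only the Python standard library (`make_mask` is a hypothetical helper, not part of any SDK; in practice you would more likely export a mask from an image editor or use a library such as Pillow):

  ```python
  import struct
  import zlib

  def _chunk(tag: bytes, data: bytes) -> bytes:
      # PNG chunk layout: length, type, data, CRC32 of type + data.
      return (struct.pack(">I", len(data)) + tag + data
              + struct.pack(">I", zlib.crc32(tag + data)))

  def make_mask(width: int, height: int, hole: tuple) -> bytes:
      """Return an RGBA PNG where the rectangle `hole` = (x0, y0, x1, y1)
      is fully transparent (alpha 0) and everything else is opaque."""
      x0, y0, x1, y1 = hole
      rows = bytearray()
      for y in range(height):
          rows.append(0)  # filter type 0 (None) for this scanline
          for x in range(width):
              alpha = 0 if (x0 <= x < x1 and y0 <= y < y1) else 255
              rows += bytes((255, 255, 255, alpha))
      # IHDR: 8-bit depth, color type 6 (RGBA), default compression/filter/interlace.
      ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)
      return (b"\x89PNG\r\n\x1a\n"
              + _chunk(b"IHDR", ihdr)
              + _chunk(b"IDAT", zlib.compress(bytes(rows)))
              + _chunk(b"IEND", b""))

  # Example: a 1024x1024 mask whose center square is editable.
  with open("mask.png", "wb") as f:
      f.write(make_mask(1024, 1024, (256, 256, 768, 768)))
  ```

  The resulting `mask.png` can then be sent alongside the source image, e.g. `-F "mask=@mask.png"` in the cURL examples below.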

<ParamField body="n" type="integer" default="1">
  The number of images to generate. Must be between 1 and 10. Some models only support `n=1`.

  <Badge>Emulated</Badge>
</ParamField>

<ParamField body="size" type="string" default="1024x1024">
  The size of the generated images. Supported sizes vary by model:

  * `256x256`
  * `512x512`
  * `1024x1024`
  * `1024x1536` (portrait)
  * `1536x1024` (landscape)

  <Badge>Converted</Badge>
</ParamField>

<ParamField body="quality" type="string" default="auto">
  The quality of the image. Options vary by model: `auto`, `low`, `medium`, `high`, `standard`, `hd`.

  <Badge>Passthrough</Badge>
</ParamField>

<ParamField body="response_format" type="string" default="url">
  The format of the generated images. Options:

  * `url` — Returns a URL to the generated image
  * `b64_json` — Returns the image as base64-encoded JSON

  <Badge>Emulated</Badge>
</ParamField>
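  When requesting `b64_json`, the image arrives as a base64 string that you decode and write to disk yourself. A minimal sketch (`save_b64_image` is a hypothetical helper name):

  ```python
  import base64

  def save_b64_image(b64_data: str, path: str) -> int:
      """Decode a b64_json payload and write it to disk; returns bytes written."""
      raw = base64.b64decode(b64_data)
      with open(path, "wb") as f:
          f.write(raw)
      return len(raw)

  # Usage with a response where response_format="b64_json":
  # save_b64_image(response.data[0].b64_json, "edited.png")
  ```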

<ParamField body="output_format" type="string" default="png">
  The image file format to generate. Lumenfall supports more formats than OpenAI:

  * `png` — Lossless compression, supports transparency
  * `jpeg` — Lossy compression, smaller file size
  * `gif` — Supports animation and transparency
  * `webp` — Modern format with good compression
  * `avif` — Best compression, modern browsers only (Limited to 1,600px on the longest side. Larger images will fall back to the original format.)

  If the provider returns a different format, Lumenfall automatically converts the image.

  <Badge>Emulated</Badge>
</ParamField>

<ParamField body="output_compression" type="integer" default="100">
  Compression quality for lossy formats (`jpeg`, `webp`, `avif`). Range: 1-100, where 100 is highest quality.

  <Badge>Emulated</Badge>
</ParamField>

<ParamField body="user" type="string">
  A unique identifier representing your end-user. Only used by some providers.

  <Badge>Passthrough</Badge>
</ParamField>

## Query parameters

<ParamField query="dryRun" type="boolean" default="false">
  If `true`, returns a cost estimate without editing the image. See [Cost estimation](/api-reference/cost-estimation).
</ParamField>

## Response

<ResponseField name="created" type="integer">
  Unix timestamp of when the request was created.
</ResponseField>

<ResponseField name="size" type="string">
  Actual output dimensions as `"WIDTHxHEIGHT"` (e.g., `"1024x1024"`). Extracted from the generated image. May differ from the requested `size` if the model produced a different resolution.
</ResponseField>
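  Because the returned dimensions can differ from what you requested, it is worth parsing this field rather than trusting the request's `size`. A small sketch (`parse_size` is a hypothetical helper):

  ```python
  def parse_size(size: str) -> tuple:
      """Split the response-level size field ("WIDTHxHEIGHT") into integers."""
      width, height = size.split("x")
      return int(width), int(height)

  # Usage: width, height = parse_size(response.size)
  ```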

<ResponseField name="data" type="array">
  Array of generated image objects.

  <Expandable title="Image object">
    <ResponseField name="url" type="string">
      URL of the generated image. Only present if `response_format` is `url`.
    </ResponseField>

    <ResponseField name="b64_json" type="string">
      Base64-encoded image data. Only present if `response_format` is `b64_json`.
    </ResponseField>

    <ResponseField name="revised_prompt" type="string">
      The prompt that was used to generate the image, if the model revised the original prompt.
    </ResponseField>
  </Expandable>
</ResponseField>

<ResponseField name="metadata" type="object">
  Metadata about the request execution, including effective cost. See [Billing](/billing#effective-cost-on-responses).

  <Expandable title="Metadata object">
    <ResponseField name="provider_name" type="string">
      Provider display name (e.g., `"Google Vertex AI"`).
    </ResponseField>

    <ResponseField name="provider" type="string">
      Provider slug (e.g., `"vertex"`).
    </ResponseField>

    <ResponseField name="upstream_id" type="string">
      The provider's request ID, useful for reconciliation.
    </ResponseField>

    <ResponseField name="model" type="string">
      The model string sent in the request.
    </ResponseField>

    <ResponseField name="executed_model" type="string">
      The model that was actually executed, as `"{provider_slug}/{provider_model}"`.
    </ResponseField>

    <ResponseField name="cost" type="number">
      Effective cost in the currency specified by `cost_currency`.
    </ResponseField>

    <ResponseField name="cost_currency" type="string">
      Currency of the cost (e.g., `"USD"`).
    </ResponseField>
  </Expandable>
</ResponseField>
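  When calling the endpoint over raw HTTP, the effective cost can be read straight from the parsed JSON body. A sketch, where `raw_body` stands in for the HTTP response body and its shape mirrors the response example below:

  ```python
  import json

  # Placeholder body for illustration; a real response comes from the API.
  raw_body = """{
    "created": 1702345678,
    "size": "1024x1024",
    "data": [{"url": "https://media.lumenfall.ai/abc123.png"}],
    "metadata": {"provider": "vertex", "cost": 0.04, "cost_currency": "USD"}
  }"""

  payload = json.loads(raw_body)
  cost = payload["metadata"]["cost"]
  currency = payload["metadata"]["cost_currency"]
  print(f"Effective cost: {cost} {currency}")  # → Effective cost: 0.04 USD
  ```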

<RequestExample>
  ```bash cURL theme={null}
  curl https://api.lumenfall.ai/openai/v1/images/edits \
    -H "Authorization: Bearer $LUMENFALL_API_KEY" \
    -F "model=gemini-3-pro-image" \
    -F "image=@original.png" \
    -F "prompt=Add a red hat to the person" \
    -F "size=1024x1024"
  ```

  ```python Python theme={null}
  from openai import OpenAI

  client = OpenAI(
      api_key="your-lumenfall-api-key",
      base_url="https://api.lumenfall.ai/openai/v1"
  )

  response = client.images.edit(
      model="gemini-3-pro-image",
      image=open("original.png", "rb"),
      prompt="Add a red hat to the person",
      size="1024x1024"
  )

  print(response.data[0].url)
  ```

  ```typescript TypeScript theme={null}
  import OpenAI from "openai";
  import fs from "fs";

  const client = new OpenAI({
    apiKey: "your-lumenfall-api-key",
    baseURL: "https://api.lumenfall.ai/openai/v1",
  });

  const response = await client.images.edit({
    model: "gemini-3-pro-image",
    image: fs.createReadStream("original.png"),
    prompt: "Add a red hat to the person",
    size: "1024x1024",
  });

  console.log(response.data[0].url);
  ```

  ```bash cURL (multiple images) theme={null}
  curl https://api.lumenfall.ai/openai/v1/images/edits \
    -H "Authorization: Bearer $LUMENFALL_API_KEY" \
    -F "model=gemini-3-pro-image" \
    -F "image[]=@lotion.png" \
    -F "image[]=@candle.png" \
    -F "image[]=@soap.png" \
    -F "prompt=Create a gift basket with these items"
  ```

  ```python Python (multiple images) theme={null}
  from openai import OpenAI

  client = OpenAI(
      api_key="your-lumenfall-api-key",
      base_url="https://api.lumenfall.ai/openai/v1"
  )

  response = client.images.edit(
      model="gemini-3-pro-image",
      image=[
          open("lotion.png", "rb"),
          open("candle.png", "rb"),
          open("soap.png", "rb"),
      ],
      prompt="Create a gift basket with these items"
  )

  print(response.data[0].url)
  ```

  ```typescript TypeScript (multiple images) theme={null}
  import OpenAI from "openai";
  import fs from "fs";

  const client = new OpenAI({
    apiKey: "your-lumenfall-api-key",
    baseURL: "https://api.lumenfall.ai/openai/v1",
  });

  const response = await client.images.edit({
    model: "gemini-3-pro-image",
    image: [
      fs.createReadStream("lotion.png"),
      fs.createReadStream("candle.png"),
      fs.createReadStream("soap.png"),
    ],
    prompt: "Create a gift basket with these items",
  });

  console.log(response.data[0].url);
  ```
</RequestExample>

<ResponseExample>
  ```json Response theme={null}
  {
    "created": 1702345678,
    "size": "1024x1024",
    "data": [
      {
        "url": "https://media.lumenfall.ai/abc123.png",
        "revised_prompt": "Add a stylish red fedora hat to the person in the image"
      }
    ],
    "metadata": {
      "model": "gemini-3-pro-image",
      "executed_model": "vertex/gemini-3-pro-image",
      "provider": "vertex",
      "provider_name": "Google Vertex AI",
      "cost": 0.04,
      "cost_currency": "USD"
    }
  }
  ```
</ResponseExample>
