# Get balance Source: https://docs.lumenfall.ai/api-reference/balance GET https://api.lumenfall.ai/v1/balance Retrieve your current account balance and billing type Returns the current balance and billing type for your account. This endpoint uses the native Lumenfall API path (`/v1/balance`), not the OpenAI-compatible prefix (`/openai/v1`). ## Response Always `balance`. Your account's billing type - either `prepaid` or `postpaid`. The available balance. Available balance in USD. `null` for postpaid accounts or if the balance is temporarily unavailable. Always `usd`. ```bash cURL theme={null} curl https://api.lumenfall.ai/v1/balance \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} import requests response = requests.get( "https://api.lumenfall.ai/v1/balance", headers={"Authorization": "Bearer your-lumenfall-api-key"} ) print(response.json()) ``` ```typescript TypeScript theme={null} const response = await fetch("https://api.lumenfall.ai/v1/balance", { headers: { Authorization: "Bearer your-lumenfall-api-key" }, }); const balance = await response.json(); console.log(balance); ``` ```json Prepaid theme={null} { "object": "balance", "billing_type": "prepaid", "available": { "amount": 12.50, "currency": "usd" } } ``` ```json Postpaid theme={null} { "object": "balance", "billing_type": "postpaid", "available": { "amount": null, "currency": "usd" } } ``` # Create chat completion Source: https://docs.lumenfall.ai/api-reference/chat/completions POST https://api.lumenfall.ai/openai/v1/chat/completions Generate text responses from a conversation Modern media applications don't just generate images and videos — they also make dozens of LLM calls for prompting, captioning, moderation, and orchestration. Instead of juggling a separate provider or router for text, you can use the same Lumenfall SDK, API key, and base URL you already use for media generation. One platform, one bill, no context-switching. 
**Powered by OpenRouter** Text completions are routed through [OpenRouter](https://openrouter.ai/docs/api/api-reference/chat/send-chat-completion-request), giving you access to the hundreds of models available on their platform — from OpenAI, Google, Anthropic, Meta, Mistral, and many more providers. All OpenRouter features are fully supported. Use any model by passing its [OpenRouter model identifier](https://openrouter.ai/models) (e.g., `google/gemini-3-flash-preview`). You can optionally prefix with `openrouter/` (e.g., `openrouter/google/gemini-3-flash-preview`), but it is not required. **OpenAI compatibility** This endpoint implements the [OpenAI Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use any [OpenAI SDK](/client-libraries/openai-sdk) by changing the base URL to `https://api.lumenfall.ai/openai/v1`. ## Request body You can include additional parameters not listed here. They will be passed through to the underlying provider. The model to use. Pass any [OpenRouter model identifier](https://openrouter.ai/models) — for example, `google/gemini-3-flash-preview` or `openai/gpt-5.4`. A list of messages comprising the conversation. Each message has a `role` and `content`. The role of the message author. One of `system`, `user`, `assistant`, or `tool`. The content of the message. Can be a string, an array of content parts (for multimodal input), or `null` (for assistant messages with tool calls). Content parts support `text` and `image_url` types: ```json theme={null} [ { "type": "text", "text": "What's in this image?" }, { "type": "image_url", "image_url": { "url": "https://example.com/image.png" } } ] ``` An optional name for the participant. Tool calls generated by the model (assistant messages only). The ID of the tool call this message is responding to (tool messages only). If `true`, the response is sent as [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) (SSE). 
Partial message deltas are sent as `data: {json}` lines, ending with `data: [DONE]`. Sampling temperature between 0 and 2. Higher values make output more random, lower values make it more focused. The maximum number of tokens to generate. Nucleus sampling parameter. Only consider tokens with cumulative probability up to this value. Penalizes tokens based on their frequency in the text so far. Range: -2.0 to 2.0. Penalizes tokens based on whether they appear in the text so far. Range: -2.0 to 2.0. Up to 4 sequences where the model will stop generating. A list of tools the model may call. Currently only `function` type tools are supported. The type of tool. Currently only `function` is supported. The name of the function. A description of what the function does. The parameters the function accepts, described as a JSON Schema object. Controls which tool the model calls. Options: * `"none"` - Do not call any tool * `"auto"` - Model decides whether to call a tool * `"required"` - Model must call a tool * `{"type": "function", "function": {"name": "my_function"}}` - Call a specific function The format of the response. Set `{"type": "json_object"}` to enable JSON mode. A seed for deterministic generation. Not all models support this. A unique identifier representing your end-user. Whether to return log probabilities of the output tokens. Number of most likely tokens to return at each position (0-20). Requires `logprobs: true`. Options for streaming responses. If `true`, an additional chunk is sent with usage information when streaming. ## Response A unique identifier for the chat completion. Always `chat.completion`. Unix timestamp of when the completion was created. The model used for the completion. A list of chat completion choices. The index of the choice. The generated message. Always `assistant`. The generated text content, or `null` if the model called a tool. Tool calls made by the model, if any. The reason the model stopped generating. 
One of `stop`, `length`, `tool_calls`, or `content_filter`. Token usage statistics for the request. Number of tokens in the prompt. Number of tokens in the generated completion. Total number of tokens used. ## Streaming When `stream: true` is set, the response is sent as server-sent events. Each event contains a `chat.completion.chunk` object with a `delta` field instead of `message`: ```json theme={null} data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1702345678,"model":"google/gemini-3-flash-preview","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]} data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1702345678,"model":"google/gemini-3-flash-preview","choices":[{"index":0,"delta":{"content":"Capybaras"},"finish_reason":null}]} data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1702345678,"model":"google/gemini-3-flash-preview","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]} data: [DONE] ``` ```bash HTTP theme={null} curl https://api.lumenfall.ai/openai/v1/chat/completions \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "google/gemini-3-flash-preview", "messages": [ {"role": "user", "content": "Why are capybaras so chill?"} ] }' ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) response = client.chat.completions.create( model="google/gemini-3-flash-preview", messages=[ {"role": "user", "content": "Why are capybaras so chill?"} ] ) print(response.choices[0].message.content) ``` ```typescript JavaScript / TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); const response = await client.chat.completions.create({ model: "google/gemini-3-flash-preview", messages: [ { role: "user", content: "Why 
are capybaras so chill?" }, ], }); console.log(response.choices[0].message.content); ``` ```go Go theme={null} package main import ( "context" "fmt" "github.com/openai/openai-go" "github.com/openai/openai-go/option" ) func main() { client := openai.NewClient( option.WithAPIKey("your-lumenfall-api-key"), option.WithBaseURL("https://api.lumenfall.ai/openai/v1"), ) response, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{ Model: openai.F("google/gemini-3-flash-preview"), Messages: openai.F([]openai.ChatCompletionMessageParamUnion{ openai.UserMessage("Why are capybaras so chill?"), }), }) if err != nil { panic(err) } fmt.Println(response.Choices[0].Message.Content) } ``` ```csharp C# / .NET theme={null} using OpenAI; using OpenAI.Chat; var options = new OpenAIClientOptions { Endpoint = new Uri("https://api.lumenfall.ai/openai/v1") }; var client = new OpenAIClient("your-lumenfall-api-key", options); var chatClient = client.GetChatClient("google/gemini-3-flash-preview"); ChatCompletion response = await chatClient.CompleteChatAsync( [new UserChatMessage("Why are capybaras so chill?")] ); Console.WriteLine(response.Content[0].Text); ``` ```java Java theme={null} import com.openai.client.OpenAIClient; import com.openai.client.okhttp.OpenAIOkHttpClient; import com.openai.models.ChatCompletionCreateParams; import com.openai.models.ChatCompletionUserMessageParam; OpenAIClient client = OpenAIOkHttpClient.builder() .apiKey("your-lumenfall-api-key") .baseUrl("https://api.lumenfall.ai/openai/v1") .build(); var params = ChatCompletionCreateParams.builder() .model("google/gemini-3-flash-preview") .addMessage(ChatCompletionUserMessageParam.builder() .content("Why are capybaras so chill?") .build()) .build(); var response = client.chat().completions().create(params); System.out.println(response.choices().get(0).message().content().orElse(null)); ``` ```ruby Ruby theme={null} require "openai" client = OpenAI::Client.new( api_key: 
"your-lumenfall-api-key", base_url: "https://api.lumenfall.ai/openai/v1" ) response = client.chat.completions.create( model: "google/gemini-3-flash-preview", messages: [ { role: "user", content: "Why are capybaras so chill?" } ] ) puts response.choices.first.message.content ``` ```json Response theme={null} { "id": "chatcmpl-abc123", "object": "chat.completion", "created": 1702345678, "model": "google/gemini-3-flash-preview", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Capybaras are remarkably calm animals for several reasons. As the largest rodents in the world, they have few natural predators in their South American habitats, which means they haven't evolved a strong fight-or-flight response. They're also highly social and semi-aquatic, spending much of their time lounging in warm water - which would make anyone relaxed!" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 14, "completion_tokens": 71, "total_tokens": 85 } } ``` # Cost estimation Source: https://docs.lumenfall.ai/api-reference/cost-estimation Estimate request costs before executing Use dry run mode to estimate the cost of a request without executing it. This validates your request parameters and returns pricing information so you can make informed decisions before generating. 
## Making a dry run request Add `?dryRun=true` as a query parameter to any request (chat completions, image generation, or image editing): ```bash cURL theme={null} curl "https://api.lumenfall.ai/openai/v1/images/generations?dryRun=true" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "gemini-3-pro-image", "prompt": "A capybara relaxing in a hot spring", "size": "1024x1024", "n": 2 }' ``` ```python Python theme={null} import requests response = requests.post( "https://api.lumenfall.ai/openai/v1/images/generations?dryRun=true", headers={"Authorization": f"Bearer {api_key}"}, json={ "model": "gemini-3-pro-image", "prompt": "A capybara relaxing in a hot spring", "size": "1024x1024", "n": 2 } ) estimate = response.json() cost_dollars = estimate["total_cost_micros"] / 1_000_000 print(f"Estimated cost: ${cost_dollars:.4f}") ``` ```typescript TypeScript theme={null} const response = await fetch( "https://api.lumenfall.ai/openai/v1/images/generations?dryRun=true", { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify({ model: "gemini-3-pro-image", prompt: "A capybara relaxing in a hot spring", size: "1024x1024", n: 2, }), } ); const estimate = await response.json(); const costDollars = estimate.total_cost_micros / 1_000_000; console.log(`Estimated cost: $${costDollars.toFixed(4)}`); ``` Dry run mode works with [chat completions](/api-reference/chat/completions), [image generation](/api-reference/images/generate), and [image editing](/api-reference/images/edit) endpoints. 
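A common pattern is to gate a request behind its estimate: dry run first, then execute only if the cost fits your budget. A minimal sketch (the helper names and the budget threshold are illustrative, not part of the API):

```python
import requests

GENERATIONS_URL = "https://api.lumenfall.ai/openai/v1/images/generations"

def estimate_cost_micros(payload: dict, api_key: str) -> int:
    """Dry-run the request and return the estimated cost in micros."""
    response = requests.post(
        f"{GENERATIONS_URL}?dryRun=true",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["total_cost_micros"]

def within_budget(cost_micros: int, budget_dollars: float) -> bool:
    """Compare an estimate in micros against a budget expressed in dollars."""
    return cost_micros <= int(budget_dollars * 1_000_000)
```

With these helpers, `within_budget(estimate_cost_micros(payload, api_key), 0.10)` decides whether to send the real (non-dry-run) request.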
## Response format Dry run requests return a cost estimate instead of generated images: ```json theme={null} { "estimated": true, "model": "gemini-3-pro-image", "provider": "vertex", "total_cost_micros": 80000, "currency": "USD", "components": [ { "type": "output", "metric": "image", "quantity": 2, "billable_quantity": 2, "unit_price": 0.04, "total_cost": 80000 } ] } ``` ### Response fields | Field | Type | Description | | ------------------- | ------- | ---------------------------------------------------------------------------------------------------- | | `estimated` | boolean | Always `true` for dry run responses | | `model` | string | The model that would be used | | `provider` | string | The provider that would handle the request (may differ on actual request as routing is re-evaluated) | | `total_cost_micros` | integer | Total estimated cost in micros (1/1,000,000 USD) | | `currency` | string | Currency code (always `USD`) | | `components` | array | Breakdown of cost components | ### Cost components Each component in the `components` array contains: | Field | Type | Description | | ------------------- | ------- | ----------------------------------------------- | | `type` | string | Component type (e.g., `output`, `input`) | | `metric` | string | What is being measured (e.g., `image`, `token`) | | `quantity` | integer | Number of units requested | | `billable_quantity` | integer | Number of units that will be billed | | `unit_price` | number | Price per unit in USD | | `total_cost` | integer | Component cost in micros | ## Converting micros to dollars Costs are returned in micros (millionths of a dollar) for precision. To convert to dollars: ```python theme={null} cost_dollars = total_cost_micros / 1_000_000 ``` For example, `80000` micros equals `$0.08`. ## Notes Cost estimates are approximate. Effective pricing is calculated after the request runs, because final costs may depend on outputs (such as token counts or the number of images generated). 
Dry run requests: * Validate your request parameters * Do not execute the request (no text generation or image creation) * Do not affect your account balance * Return quickly since no generation occurs # Edit images Source: https://docs.lumenfall.ai/api-reference/images/edit POST https://api.lumenfall.ai/openai/v1/images/edits Edit images using text prompts Edit or extend images using AI models from various providers. This endpoint accepts `multipart/form-data` requests for file uploads. **OpenAI Compatibility** This endpoint implements the [OpenAI Images Edit API](https://platform.openai.com/docs/api-reference/images/createEdit). You can use any [OpenAI SDK](/client-libraries/openai-sdk) by changing the base URL to `https://api.lumenfall.ai/openai/v1`. Lumenfall [normalizes behavior](/unified-model-behavior) across all models - mapping parameters, emulating features, and standardizing errors - so your code works consistently regardless of which provider handles the request. ## Request body You can include additional parameters not listed here. They will be passed through to the underlying provider. Each parameter has a badge showing how Lumenfall handles it across different providers: | Badge | Meaning | | -------------------------- | ---------------------------------------------------------------------------------- | | Passthrough | Passed as-is; some providers may ignore it | | Renamed | Field name is mapped to the provider's expected name | | Converted | Value is transformed to match each provider's format | | Emulated | Works consistently on all models, even if the provider doesn't natively support it | Learn more about [unified model behavior](/unified-model-behavior#parameter-support). The image(s) to edit. Must be a supported image file (PNG, WebP, or JPG) or an array of images. Maximum file size varies by model. Renamed A text description of the desired edit. Maximum length varies by model. Renamed The model to use for image editing. See [Models](/models). 
An image whose fully transparent areas (where alpha is zero) indicate where the image should be edited. Must be a valid PNG file with the same dimensions as the source image. Passthrough The number of images to generate. Must be between 1 and 10. Some models only support `n=1`. Emulated The size of the generated images. Supported sizes vary by model: * `256x256` * `512x512` * `1024x1024` * `1024x1536` (portrait) * `1536x1024` (landscape) Converted The quality of the image. Options vary by model: `auto`, `low`, `medium`, `high`, `standard`, `hd`. Passthrough The format of the generated images. Options: * `url` — Returns a URL to the generated image * `b64_json` — Returns the image as base64-encoded JSON Emulated The image file format to generate. Lumenfall supports more formats than OpenAI: * `png` — Lossless compression, supports transparency * `jpeg` — Lossy compression, smaller file size * `gif` — Supports animation and transparency * `webp` — Modern format with good compression * `avif` — Best compression, modern browsers only (Limited to 1,600px on the longest side. Larger images will fall back to the original format.) If the provider returns a different format, Lumenfall automatically converts the image. Emulated Compression quality for lossy formats (`jpeg`, `webp`, `avif`). Range: 1-100, where 100 is highest quality. Emulated A unique identifier representing your end-user. Only used by some providers. Passthrough ## Query parameters If `true`, returns a cost estimate without editing the image. See [Cost estimation](/api-reference/cost-estimation). ## Response Unix timestamp of when the request was created. Actual output dimensions as `"WIDTHxHEIGHT"` (e.g., `"1024x1024"`). Extracted from the generated image. May differ from the requested `size` if the model produced a different resolution. Array of generated image objects. URL of the generated image. Only present if `response_format` is `url`. Base64-encoded image data. 
Only present if `response_format` is `b64_json`. The prompt that was used to generate the image, if the model revised the original prompt. Metadata about the request execution, including effective cost. See [Billing](/billing#effective-cost-on-responses). Provider display name (e.g., `"Google Vertex AI"`). Provider slug (e.g., `"vertex"`). The provider's request ID, useful for reconciliation. The model string sent in the request. The model that was actually executed, as `"{provider_slug}/{provider_model}"`. Effective cost in the currency specified by `cost_currency`. Currency of the cost (e.g., `"USD"`). ```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/images/edits \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -F "model=gemini-3-pro-image" \ -F "image=@original.png" \ -F "prompt=Add a red hat to the person" \ -F "size=1024x1024" ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) response = client.images.edit( model="gemini-3-pro-image", image=open("original.png", "rb"), prompt="Add a red hat to the person", size="1024x1024" ) print(response.data[0].url) ``` ```typescript TypeScript theme={null} import OpenAI from "openai"; import fs from "fs"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); const response = await client.images.edit({ model: "gemini-3-pro-image", image: fs.createReadStream("original.png"), prompt: "Add a red hat to the person", size: "1024x1024", }); console.log(response.data[0].url); ``` ```bash cURL (multiple images) theme={null} curl https://api.lumenfall.ai/openai/v1/images/edits \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -F "model=gemini-3-pro-image" \ -F "image[]=@lotion.png" \ -F "image[]=@candle.png" \ -F "image[]=@soap.png" \ -F "prompt=Create a gift basket with these items" ``` ```python Python (multiple images) theme={null} from openai 
import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) response = client.images.edit( model="gemini-3-pro-image", image=[ open("lotion.png", "rb"), open("candle.png", "rb"), open("soap.png", "rb"), ], prompt="Create a gift basket with these items" ) print(response.data[0].url) ``` ```typescript TypeScript (multiple images) theme={null} import OpenAI from "openai"; import fs from "fs"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); const response = await client.images.edit({ model: "gemini-3-pro-image", image: [ fs.createReadStream("lotion.png"), fs.createReadStream("candle.png"), fs.createReadStream("soap.png"), ], prompt: "Create a gift basket with these items", }); console.log(response.data[0].url); ``` ```json Response theme={null} { "created": 1702345678, "size": "1024x1024", "data": [ { "url": "https://media.lumenfall.ai/abc123.png", "revised_prompt": "Add a stylish red fedora hat to the person in the image" } ], "metadata": { "model": "gemini-3-pro-image", "executed_model": "vertex/gemini-3-pro-image", "provider": "vertex", "provider_name": "Google Vertex AI", "cost": 0.04, "cost_currency": "USD" } } ``` # Generate images Source: https://docs.lumenfall.ai/api-reference/images/generate POST https://api.lumenfall.ai/openai/v1/images/generations Create images from text prompts Generate images from a text prompt using AI models from various providers. **OpenAI Compatibility** This endpoint implements the [OpenAI Images API](https://platform.openai.com/docs/api-reference/images/create). You can use any [OpenAI SDK](/client-libraries/openai-sdk) by changing the base URL to `https://api.lumenfall.ai/openai/v1`. Lumenfall [normalizes behavior](/unified-model-behavior) across all models - mapping parameters, emulating features, and standardizing errors - so your code works consistently regardless of which provider handles the request. 
## Request body You can include additional parameters not listed here. They will be passed through to the underlying provider. Each parameter has a badge showing how Lumenfall handles it across different providers: | Badge | Meaning | | -------------------------- | ---------------------------------------------------------------------------------- | | Passthrough | Passed as-is; some providers may ignore it | | Renamed | Field name is mapped to the provider's expected name | | Converted | Value is transformed to match each provider's format | | Emulated | Works consistently on all models, even if the provider doesn't natively support it | Learn more about [unified model behavior](/unified-model-behavior#parameter-support). A text description of the desired image. Maximum length varies by model. Renamed The model to use for image generation. See [Models](/models). The number of images to generate. Must be between 1 and 10. Some models only support `n=1`. Emulated The size of the generated images. Supported sizes vary by model: * `256x256` * `512x512` * `1024x1024` * `1024x1792` (portrait) * `1792x1024` (landscape) Converted The quality of the image. Options: `standard`, `hd`. Only supported by some models. Passthrough The format of the generated images. Options: * `url` — Returns a URL to the generated image * `b64_json` — Returns the image as base64-encoded JSON Emulated The image file format to generate. Lumenfall supports more formats than OpenAI: * `png` — Lossless compression, supports transparency * `jpeg` — Lossy compression, smaller file size * `gif` — Supports animation and transparency * `webp` — Modern format with good compression * `avif` — Best compression, modern browsers only (Limited to 1,600px on the longest side. Larger images will fall back to the original format.) If the provider returns a different format, Lumenfall automatically converts the image. Emulated Compression quality for lossy formats (`jpeg`, `webp`, `avif`). 
Range: 1-100, where 100 is highest quality. Emulated The style of the generated images. Options: `vivid`, `natural`. Only supported by some models like DALL-E 3. Passthrough A unique identifier representing your end-user. Only used by some providers. Passthrough ## Query parameters If `true`, returns a cost estimate without generating the image. See [Cost estimation](/api-reference/cost-estimation). ## Response Unix timestamp of when the request was created. Actual output dimensions as `"WIDTHxHEIGHT"` (e.g., `"1024x1024"`). Extracted from the generated image. May differ from the requested `size` if the model produced a different resolution. Array of generated image objects. URL of the generated image. Only present if `response_format` is `url`. Base64-encoded image data. Only present if `response_format` is `b64_json`. The prompt that was used to generate the image, if the model revised the original prompt. Metadata about the request execution, including effective cost. See [Billing](/billing#effective-cost-on-responses). Provider display name (e.g., `"Google Vertex AI"`). Provider slug (e.g., `"vertex"`). The provider's request ID, useful for reconciliation. The model string sent in the request. The model that was actually executed, as `"{provider_slug}/{provider_model}"`. Effective cost in the currency specified by `cost_currency`. Currency of the cost (e.g., `"USD"`). 
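Because the `metadata` object rides along on every response, you can log effective per-request cost without a separate billing call. A small sketch of pulling those fields out of a decoded response body (the helper name is illustrative; it assumes `response_format` is `url` and the metadata fields documented above):

```python
def extract_cost(body: dict) -> tuple[str, float, str]:
    """Return (image_url, effective_cost, currency) from a generation response."""
    metadata = body["metadata"]
    return (
        body["data"][0]["url"],
        metadata["cost"],           # effective cost in cost_currency
        metadata["cost_currency"],  # e.g. "USD"
    )
```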
```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/images/generations \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "gemini-3-pro-image", "prompt": "A futuristic city skyline at sunset", "n": 1, "size": "1024x1024" }' ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) response = client.images.generate( model="gemini-3-pro-image", prompt="A futuristic city skyline at sunset", n=1, size="1024x1024" ) print(response.data[0].url) ``` ```typescript TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); const response = await client.images.generate({ model: "gemini-3-pro-image", prompt: "A futuristic city skyline at sunset", n: 1, size: "1024x1024", }); console.log(response.data[0].url); ``` ```json Response theme={null} { "created": 1702345678, "size": "1024x1024", "data": [ { "url": "https://media.lumenfall.ai/abc123.png", "revised_prompt": "A futuristic city skyline at sunset with flying vehicles and neon lights" } ], "metadata": { "model": "gemini-3-pro-image", "executed_model": "vertex/gemini-3-pro-image", "provider": "vertex", "provider_name": "Google Vertex AI", "cost": 0.04, "cost_currency": "USD" } } ``` # API Reference Source: https://docs.lumenfall.ai/api-reference/introduction Complete reference for the Lumenfall API - text, image, and video generation The Lumenfall API is OpenAI-compatible, meaning you can use existing OpenAI SDKs by simply changing the base URL to `https://api.lumenfall.ai/openai/v1`. 
## Base URL ``` https://api.lumenfall.ai/openai/v1 ``` ## Authentication All API requests require a Bearer token in the `Authorization` header: ```bash theme={null} Authorization: Bearer lmnfl_your_api_key ``` Get your API key from the [Lumenfall dashboard](https://lumenfall.ai/app). ## Request format All requests should include: * `Content-Type: application/json` header * JSON request body (for POST requests) ## Response format All responses follow the OpenAI response format. Successful responses return JSON with the requested data. Error responses include an `error` object: ```json theme={null} { "error": { "message": "Invalid API key", "type": "authentication_error", "code": "AUTHENTICATION_FAILED" } } ``` ## Error codes | Code | HTTP Status | Description | | ------------------------- | ----------- | ------------------------------------------ | | `AUTHENTICATION_FAILED` | 401 | Invalid or missing API key | | `INSUFFICIENT_BALANCE` | 402 | Account balance too low (prepaid accounts) | | `INVALID_REQUEST` | 400 | Malformed request or invalid parameters | | `MODEL_NOT_FOUND` | 404 | Requested model does not exist | | `MODEL_DISABLED` | 404 | Model is currently disabled | | `RATE_LIMITED` | 429 | Too many requests | | `ALL_PROVIDERS_EXHAUSTED` | 502 | All providers failed to handle the request | | `UPSTREAM_ERROR` | varies | A provider returned an error | ## Rate limits Lumenfall does not impose strict per-request rate limits. Your usage is governed by your account balance - requests are processed as long as you have sufficient credits. For prepaid accounts, ensure you have enough balance or enable [auto top-up](/billing#auto-top-up) to avoid interruptions. ## Dry run mode Add `?dryRun=true` to any request to get a cost estimate without executing it. See [Cost estimation](/api-reference/cost-estimation) for details and response format. 
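The error codes above lend themselves to a simple client-side policy: retry transient failures, surface balance problems, and treat the rest as caller errors. A sketch (the bucket names and the exact grouping are illustrative):

```python
RETRYABLE = {"RATE_LIMITED", "ALL_PROVIDERS_EXHAUSTED"}
CALLER_ERRORS = {
    "AUTHENTICATION_FAILED",
    "INVALID_REQUEST",
    "MODEL_NOT_FOUND",
    "MODEL_DISABLED",
}

def classify_error(body: dict) -> str:
    """Map a Lumenfall error body to a coarse handling bucket."""
    code = body.get("error", {}).get("code", "")
    if code in RETRYABLE:
        return "retry"        # transient: retry with backoff
    if code == "INSUFFICIENT_BALANCE":
        return "top_up"       # prepaid balance too low
    if code in CALLER_ERRORS:
        return "fix_request"  # request or key problem: do not retry
    return "unknown"          # includes UPSTREAM_ERROR, whose status varies
```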
## Response metadata All media generation responses include a `metadata` object with provider routing details and the effective cost of the request. For videos, cost fields update as the job progresses — `cost_estimate` while in progress, `cost` once completed. See [Billing](/billing#effective-cost-on-responses) for details. # List API keys Source: https://docs.lumenfall.ai/api-reference/keys/list GET https://api.lumenfall.ai/v1/keys Retrieve a paginated list of your organization's API keys Returns a paginated list of API keys for your organization, sorted newest first. This endpoint uses the native Lumenfall API path (`/v1/keys`), not the OpenAI-compatible prefix (`/openai/v1`). ## Query parameters Number of results to return per page. Minimum 1, maximum 100. Cursor for forward pagination. Returns results after the key with this ID. Use the `next_page_url` from a previous response instead of constructing this manually. Cursor for backward pagination. Returns results before the key with this ID. Cannot be combined with `starting_after`. Filter keys by status. One of `active` or `revoked`. When omitted, both active and revoked keys are returned. ## Response Always `list`. Array of key objects, sorted newest first. Unique key identifier (e.g., `2m4jLFHhkN1i4VrVHpjJqCOKlPw`). Always `key`. Human-readable name for the key, or `null` if unnamed. Last four characters of the API key for identification. Key status — `active` or `revoked`. When the key was created, as an ISO 8601 datetime. When the key was revoked, as an ISO 8601 datetime. `null` for active keys. URL to fetch the next page of results. `null` when there are no more results. Includes all original filters so you can follow it directly. URL to fetch the previous page of results. `null` when on the first page. 
```bash cURL theme={null} curl "https://api.lumenfall.ai/v1/keys" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```bash cURL (active only) theme={null} curl "https://api.lumenfall.ai/v1/keys?status=active" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} import requests response = requests.get( "https://api.lumenfall.ai/v1/keys", headers={"Authorization": "Bearer your-lumenfall-api-key"}, params={"status": "active"}, ) data = response.json() for key in data["data"]: print(f"{key['name']} ({key['last_four']}) — {key['status']}") ``` ```typescript TypeScript theme={null} const response = await fetch("https://api.lumenfall.ai/v1/keys?status=active", { headers: { Authorization: "Bearer your-lumenfall-api-key" }, }); const data = await response.json(); for (const key of data.data) { console.log(`${key.name} (${key.last_four}) — ${key.status}`); } ``` ```json Response theme={null} { "object": "list", "data": [ { "id": "2m4jLFHhkN1i4VrVHpjJqCOKlPw", "object": "key", "name": "Production", "last_four": "a1b2", "status": "active", "created_at": "2026-03-10T09:15:00.000Z", "revoked_at": null }, { "id": "2m4jKEGfmR8h3UqTGnhHpBNkMOx", "object": "key", "name": "Staging", "last_four": "c3d4", "status": "revoked", "created_at": "2026-02-01T12:00:00.000Z", "revoked_at": "2026-03-05T18:30:00.000Z" } ], "next_page_url": null, "previous_page_url": null } ``` # Get model Source: https://docs.lumenfall.ai/api-reference/models/get GET https://api.lumenfall.ai/openai/v1/models/{model} Retrieve details about a specific model Retrieves information about a specific model. ## Path parameters The ID of the model to retrieve (e.g., `gemini-3-pro-image`, `gpt-image-1.5`). ## Response The model identifier. Always `model`. Unix timestamp of when the model was added. The organization that created the model. 
```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/models/gemini-3-pro-image \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) model = client.models.retrieve("gemini-3-pro-image") print(model) ``` ```typescript TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); const model = await client.models.retrieve("gemini-3-pro-image"); console.log(model); ``` ```json Response theme={null} { "id": "gemini-3-pro-image", "object": "model", "created": 1702300000, "owned_by": "google" } ``` ## Errors Returned if the model is not found. ```json 404 Not Found theme={null} { "error": { "message": "Model 'invalid-model' not found", "type": "invalid_request_error", "code": "MODEL_NOT_FOUND" } } ``` # List models Source: https://docs.lumenfall.ai/api-reference/models/list GET https://api.lumenfall.ai/openai/v1/models Get a list of all available models Returns a list of all models available through Lumenfall. ## Response Always `list`. Array of model objects. The model identifier (e.g., `gemini-3-pro-image`, `gpt-image-1.5`). Always `model`. Unix timestamp of when the model was added. The organization that created the model. 
```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/models \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) models = client.models.list() for model in models.data: print(model.id) ``` ```typescript TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); const models = await client.models.list(); for (const model of models.data) { console.log(model.id); } ``` ```json Response theme={null} { "object": "list", "data": [ { "id": "gemini-3-pro-image", "object": "model", "created": 1702300000, "owned_by": "google" }, { "id": "gpt-image-1.5", "object": "model", "created": 1698785189, "owned_by": "openai" }, { "id": "flux.2-max", "object": "model", "created": 1702300000, "owned_by": "black-forest-labs" }, { "id": "seedream-4.5", "object": "model", "created": 1702300000, "owned_by": "bytedance" }, { "id": "qwen-image", "object": "model", "created": 1702300000, "owned_by": "alibaba" } ] } ``` # List requests Source: https://docs.lumenfall.ai/api-reference/requests/list GET https://api.lumenfall.ai/v1/requests Retrieve a paginated list of your API requests with optional filtering and cost summary Returns a paginated list of API requests for your organization, sorted newest first. This endpoint uses the native Lumenfall API path (`/v1/requests`), not the OpenAI-compatible prefix (`/openai/v1`). ## Query parameters Number of results to return per page. Minimum 1, maximum 100. Cursor for forward pagination. Returns results older than the request with this ID. Use the `next_page_url` from a previous response instead of constructing this manually. Cursor for backward pagination. Returns results newer than the request with this ID. Cannot be combined with `starting_after`. 
Filter to requests created at or after this time. ISO 8601 datetime (e.g., `2026-03-01T00:00:00Z`). Filter to requests created at or before this time. ISO 8601 datetime (e.g., `2026-03-31T23:59:59Z`). Filter to requests made with a specific API key ID. You can retrieve your key IDs from the [List API keys](/api-reference/keys/list) endpoint. When `true`, includes a `summary` object with the total cost and count of all matching requests (not just the current page). ## Response Always `list`. Array of request objects, sorted newest first. Unique request identifier (e.g., `req_2m4jLFHhkN1i4VrVHpjJqCOKlPw`). Always `request`. When the request started, as an ISO 8601 datetime. The model used (e.g., `flux-pro`, `gpt-image-1.5`). The request modality - `image`, `video`, `speech`, or `vision`. The API endpoint path (e.g., `/openai/v1/images/generations`). Request status - `completed`, `pending`, `processing`, `upstream_failure`, `rejected`, or `cancelled`. Request cost in USD (e.g., `0.045`). Always `usd`. Total response time in milliseconds. The API key ID used for this request. Error code if the request failed (e.g., `ALL_PROVIDERS_EXHAUSTED`). Human-readable error message if the request failed. Caller-provided session ID for grouping related requests. Trace ID extracted from the W3C `traceparent` header. End user identifier from the request body. Size of the request body in bytes. Size of the response body in bytes. URL to fetch the next page of results. `null` when there are no more results. Includes all original filters so you can follow it directly. URL to fetch the previous page of results. `null` when on the first page. Only present when `summary=true`. Aggregates across all matching requests, not just the current page. Total cost of all matching requests in USD. Always `usd`. Total number of matching requests. 
```bash cURL theme={null} curl "https://api.lumenfall.ai/v1/requests?limit=5" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```bash cURL (with filters) theme={null} curl "https://api.lumenfall.ai/v1/requests?created_after=2026-03-01T00:00:00Z&created_before=2026-03-31T23:59:59Z&summary=true" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} import requests response = requests.get( "https://api.lumenfall.ai/v1/requests", headers={"Authorization": "Bearer your-lumenfall-api-key"}, params={ "limit": 5, "created_after": "2026-03-01T00:00:00Z", "summary": "true", }, ) data = response.json() for req in data["data"]: print(f"{req['created_at']} {req['model']} ${req['cost']}") # Paginate while data["next_page_url"]: response = requests.get( f"https://api.lumenfall.ai{data['next_page_url']}", headers={"Authorization": "Bearer your-lumenfall-api-key"}, ) data = response.json() for req in data["data"]: print(f"{req['created_at']} {req['model']} ${req['cost']}") ``` ```typescript TypeScript theme={null} const baseUrl = "https://api.lumenfall.ai"; const headers = { Authorization: "Bearer your-lumenfall-api-key" }; let url: string | null = `${baseUrl}/v1/requests?limit=5&summary=true`; while (url) { const response = await fetch(url, { headers }); const data = await response.json(); for (const req of data.data) { console.log(`${req.created_at} ${req.model} $${req.cost}`); } url = data.next_page_url ? 
`${baseUrl}${data.next_page_url}` : null; } ``` ```json Response theme={null} { "object": "list", "data": [ { "id": "req_2m4jLFHhkN1i4VrVHpjJqCOKlPw", "object": "request", "created_at": "2026-03-15T14:32:10.000Z", "model": "flux-pro", "modality": "image", "endpoint": "/openai/v1/images/generations", "status": "completed", "cost": 0.045, "currency": "usd", "duration_ms": 2340, "key_id": "test_abc123", "error_code": null, "error_message": null, "session_id": null, "trace_id": null, "user": null, "request_size_bytes": 1024, "response_size_bytes": 524288 }, { "id": "req_2m4jKEGfmR8h3UqTGnhHpBNkMOx", "object": "request", "created_at": "2026-03-15T14:30:05.000Z", "model": "gpt-image-1.5", "modality": "image", "endpoint": "/openai/v1/images/generations", "status": "completed", "cost": 0.02, "currency": "usd", "duration_ms": 4120, "key_id": "test_abc123", "error_code": null, "error_message": null, "session_id": null, "trace_id": null, "user": null, "request_size_bytes": 512, "response_size_bytes": 1048576 } ], "next_page_url": "/v1/requests?starting_after=req_2m4jKEGfmR8h3UqTGnhHpBNkMOx&limit=5", "previous_page_url": null } ``` ```json With summary theme={null} { "object": "list", "data": [ { "id": "req_2m4jLFHhkN1i4VrVHpjJqCOKlPw", "object": "request", "created_at": "2026-03-15T14:32:10.000Z", "model": "flux-pro", "modality": "image", "endpoint": "/openai/v1/images/generations", "status": "completed", "cost": 0.045, "currency": "usd", "duration_ms": 2340, "key_id": "test_abc123", "error_code": null, "error_message": null, "session_id": null, "trace_id": null, "user": null, "request_size_bytes": 1024, "response_size_bytes": 524288 } ], "next_page_url": "/v1/requests?starting_after=req_2m4jLFHhkN1i4VrVHpjJqCOKlPw&limit=20&summary=true", "previous_page_url": null, "summary": { "total_cost": 12.34, "currency": "usd", "count": 347 } } ``` # Purge request payloads Source: https://docs.lumenfall.ai/api-reference/requests/purge-payloads DELETE 
https://api.lumenfall.ai/v1/requests/{request_id}/payloads Delete stored request and response body data and media for a specific request Permanently deletes the stored request body, response body, and any associated media files for a specific request. The request metadata (cost, timing, model, status) is preserved - only the payload data is removed. This operation is idempotent. Calling it on an already-purged request returns a success response with `already_purged` set to `true`. This endpoint uses the native Lumenfall API path (`/v1/requests`), not the OpenAI-compatible prefix (`/openai/v1`). ## Path parameters The ID of the request whose payloads should be purged (e.g., `req_2m4jLFHhkN1i4VrVHpjJqCOKlPw`). ## Response The request ID that was purged. Always `true` on a successful response. `true` if the request was already purged in a previous call. `false` if this call performed the purge. The number of media files deleted from storage. `0` if the request had no stored media or was already purged. 
```bash cURL theme={null} curl -X DELETE "https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} import requests response = requests.delete( "https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads", headers={"Authorization": "Bearer your-lumenfall-api-key"}, ) print(response.json()) ``` ```typescript TypeScript theme={null} const response = await fetch( "https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads", { method: "DELETE", headers: { Authorization: "Bearer your-lumenfall-api-key" }, } ); const data = await response.json(); console.log(data); ``` ```go Go theme={null} req, _ := http.NewRequest( "DELETE", "https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads", nil, ) req.Header.Set("Authorization", "Bearer your-lumenfall-api-key") resp, _ := http.DefaultClient.Do(req) defer resp.Body.Close() body, _ := io.ReadAll(resp.Body) fmt.Println(string(body)) ``` ```csharp C# / .NET theme={null} using var client = new HttpClient(); client.DefaultRequestHeaders.Add("Authorization", "Bearer your-lumenfall-api-key"); var response = await client.DeleteAsync( "https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads" ); var json = await response.Content.ReadAsStringAsync(); Console.WriteLine(json); ``` ```java Java theme={null} HttpClient client = HttpClient.newHttpClient(); HttpRequest request = HttpRequest.newBuilder() .uri(URI.create("https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads")) .header("Authorization", "Bearer your-lumenfall-api-key") .DELETE() .build(); HttpResponse response = client.send(request, HttpResponse.BodyHandlers.ofString()); System.out.println(response.body()); ``` ```ruby Ruby theme={null} require "net/http" require "json" uri = URI("https://api.lumenfall.ai/v1/requests/req_2m4jLFHhkN1i4VrVHpjJqCOKlPw/payloads") req = 
Net::HTTP::Delete.new(uri) req["Authorization"] = "Bearer your-lumenfall-api-key" response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) } puts JSON.parse(response.body) ``` ```json Response theme={null} { "id": "req_2m4jLFHhkN1i4VrVHpjJqCOKlPw", "purged": true, "already_purged": false, "media_deleted": 2 } ``` ```json Already purged theme={null} { "id": "req_2m4jLFHhkN1i4VrVHpjJqCOKlPw", "purged": true, "already_purged": true, "media_deleted": 0 } ``` # Cancel video Source: https://docs.lumenfall.ai/api-reference/videos/cancel DELETE https://api.lumenfall.ai/openai/v1/videos/{id} Cancel a video generation request Cancel a pending or in-progress video generation request. Cancellation is best-effort - if the video has already completed, this has no effect. ## Path parameters The ID of the video to cancel. ## Response Returns `204 No Content` on success. ```bash cURL theme={null} curl -X DELETE https://api.lumenfall.ai/openai/v1/videos/video_abc123 \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) client.videos.delete("video_abc123") ``` ```typescript JavaScript / TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); await client.videos.delete("video_abc123"); ``` ```text 204 No Content theme={null} (empty response body) ``` # Generate videos Source: https://docs.lumenfall.ai/api-reference/videos/generate POST https://api.lumenfall.ai/openai/v1/videos Create videos from text or image prompts Generate videos from a text prompt or input image using AI models from various providers. **OpenAI compatibility** This endpoint implements the [OpenAI Videos API](https://platform.openai.com/docs/api-reference/videos/create). 
You can use any [OpenAI SDK](/client-libraries/openai-sdk) by changing the base URL to `https://api.lumenfall.ai/openai/v1`. Lumenfall [normalizes behavior](/unified-model-behavior) across all models - mapping parameters, emulating features, and standardizing errors - so your code works consistently regardless of which provider handles the request. **Async workflow** Video generation is asynchronous. A successful request returns a `202` response with a video object in `queued` status. Poll `GET /openai/v1/videos/{id}` until the status is `completed` or `failed`. You can also use [webhooks](/webhooks) to receive a notification when the video is ready. **Content types** This endpoint accepts both `application/json` and `multipart/form-data` requests. Use multipart when you want to upload image files directly instead of passing URLs. ## Request body You can include additional parameters not listed here. They will be passed through to the underlying provider. Each parameter has a badge showing how Lumenfall handles it across different providers: | Badge | Meaning | | -------------------------- | ---------------------------------------------------------------------------------- | | Passthrough | Passed as-is; some providers may ignore it | | Renamed | Field name is mapped to the provider's expected name | | Converted | Value is transformed to match each provider's format | | Emulated | Works consistently on all models, even if the provider doesn't natively support it | Learn more about [unified model behavior](/unified-model-behavior#parameter-support). A text description of the desired video. Maximum length varies by model. Renamed The model to use for video generation. See [Models](/models). Duration of the video in seconds. Also accepted as `duration`. Converted The dimensions of the generated video, as `WIDTHxHEIGHT` (e.g., `1920x1080`) or aspect ratio (e.g., `16:9`). Supported sizes vary by model. Converted The number of videos to generate. Must be between 1 and 4.
Emulated Aspect ratio for the video (e.g., `16:9`, `9:16`, `1:1`). Lumenfall extension - converted to the provider's native format. Converted Video resolution shorthand: `720p`, `1080p`. Converted to appropriate dimensions per provider. Converted Reference image(s) for image-to-video generation. Not all models support this - check the model's capabilities. Accepts a single object or an array of objects: ```json theme={null} // Single reference {"image_url": "https://example.com/photo.jpg"} // Multiple references [{"image_url": "https://..."}, {"image_url": "https://..."}] ``` `image_url` can be an HTTPS URL or a base64 data URI. The number of references accepted depends on the model (most models support at most 1). When using `multipart/form-data`, send file uploads or URL strings as `input_reference` fields instead. Multiple files are supported via `input_reference`, `input_reference[]`, or `input_reference[N]` field names. Renamed Text describing what to avoid in the video. Maximum 5000 characters. Renamed How long to retain generated media. See [Media retention](/media-retention). URL to receive a webhook notification when the video completes or fails. Deliveries are signed with your organization's webhook secret - retrieve it via [Get webhook secret](/api-reference/webhooks/secret). See [Webhooks](/webhooks) for payload format and verification. A unique key (up to 256 characters) to prevent duplicate requests. If you send the same key twice, the second request returns the existing video instead of creating a new one. Key-value pairs of strings to attach to the video object. A unique identifier representing your end-user. Only used by some providers. Passthrough ## Query parameters If `true`, returns a cost estimate without generating the video. See [Cost estimation](/api-reference/cost-estimation). ## Response Returns a `202 Accepted` response with the video object. Unique identifier for the video. Always `"video"`. Unix timestamp of when the video was created. 
The generation status. One of `queued`, `in_progress`, `completed`, or `failed`. The model used for generation. The requested duration. Output dimensions as `"WIDTHxHEIGHT"` (e.g., `"1920x1080"`) or aspect ratio (e.g., `"16:9"`). Metadata about the request execution, including cost estimates. See [Billing](/billing#effective-cost-on-responses). Provider display name (e.g., `"Google Vertex AI"`). Provider slug (e.g., `"replicate"`). The provider's job ID, useful for reconciliation. The model string sent in the request. The model that was actually executed, as `"{provider_slug}/{provider_model}"`. Final effective cost. Only present when the job is `completed`. Estimated cost. Present while the job is `queued` or `in_progress`. Currency of the cost (e.g., `"USD"`). ```bash cURL (text-to-video) theme={null} curl -X POST https://api.lumenfall.ai/openai/v1/videos \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "sora-2", "prompt": "A capybara lounging in a hot spring, steam rising gently, slow camera pan", "seconds": 10, "size": "1920x1080" }' ``` ```bash cURL (image-to-video, JSON) theme={null} curl -X POST https://api.lumenfall.ai/openai/v1/videos \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "sora-2", "prompt": "The capybara turns its head and blinks slowly", "seconds": 5, "input_reference": { "image_url": "https://example.com/capybara.jpg" } }' ``` ```bash cURL (image-to-video, multipart) theme={null} curl -X POST https://api.lumenfall.ai/openai/v1/videos \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -F model=sora-2 \ -F prompt="The capybara turns its head and blinks slowly" \ -F seconds=5 \ -F input_reference=@capybara.jpg ``` ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) # Text-to-video video = client.videos.create( model="sora-2", prompt="A 
capybara lounging in a hot spring, steam rising gently, slow camera pan", seconds=10, size="1920x1080", ) print(video.id) # Use this ID to poll for status ``` ```typescript JavaScript / TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); // Text-to-video const video = await client.videos.create({ model: "sora-2", prompt: "A capybara lounging in a hot spring, steam rising gently, slow camera pan", seconds: 10, size: "1920x1080", }); console.log(video.id); // Use this ID to poll for status ``` ```json 202 Response theme={null} { "id": "video_abc123", "object": "video", "created_at": 1702345678, "status": "queued", "model": "sora-2", "seconds": "10", "size": "1920x1080", "metadata": { "model": "sora-2", "executed_model": "openai/sora-2", "provider": "openai", "provider_name": "OpenAI", "cost_estimate": 0.21, "cost_currency": "USD" } } ``` # Get video Source: https://docs.lumenfall.ai/api-reference/videos/get GET https://api.lumenfall.ai/openai/v1/videos/{id} Check the status of a video generation request Retrieve the current status and output of a video generation request. Use this endpoint to poll for completion after submitting a video with [`POST /openai/v1/videos`](/api-reference/videos/generate). ## Path parameters The ID of the video to retrieve (returned from the generate endpoint). ## Response Unique identifier for the video. Always `"video"`. Unix timestamp of when the video was created. Unix timestamp of when the video finished generating. `null` until completed. Unix timestamp of when the video output URL will expire. `null` until completed. The generation status. One of `queued`, `in_progress`, `completed`, or `failed`. The model used for generation. The prompt used to generate the video. The video duration. Output dimensions as `"WIDTHxHEIGHT"` (e.g., `"1920x1080"`) or aspect ratio (e.g., `"16:9"`). The generated video output.
Only present when status is `completed`. URL of the generated video file. This URL expires at `expires_at`. MIME type of the video (e.g., `video/mp4`). Size of the video file in bytes. Error details. Only present when status is `failed`. Error code. Human-readable error message. Metadata about the request execution, including cost. See [Billing](/billing#effective-cost-on-responses). Provider display name (e.g., `"Google Vertex AI"`). Provider slug (e.g., `"replicate"`). The provider's job ID, useful for reconciliation. The model string sent in the request. The model that was actually executed, as `"{provider_slug}/{provider_model}"`. Final effective cost. Only present when the job is `completed`. Estimated cost. Present while the job is `queued` or `in_progress`. Currency of the cost (e.g., `"USD"`). ```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/videos/video_abc123 \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} import time from openai import OpenAI client = OpenAI( api_key="your-lumenfall-api-key", base_url="https://api.lumenfall.ai/openai/v1" ) # Poll until the video is ready video_id = "video_abc123" while True: video = client.videos.retrieve(video_id) if video.status == "completed": print(video.output.url) break elif video.status == "failed": print(f"Error: {video.error.message}") break time.sleep(5) ``` ```typescript JavaScript / TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "your-lumenfall-api-key", baseURL: "https://api.lumenfall.ai/openai/v1", }); // Poll until the video is ready const videoId = "video_abc123"; while (true) { const video = await client.videos.retrieve(videoId); if (video.status === "completed") { console.log(video.output.url); break; } else if (video.status === "failed") { console.error(`Error: ${video.error.message}`); break; } await new Promise((r) => setTimeout(r, 5000)); } ``` ```json In progress theme={null} { "id": "video_abc123", 
"object": "video", "created_at": 1702345678, "completed_at": null, "expires_at": null, "status": "in_progress", "model": "sora-2", "prompt": "A capybara lounging in a hot spring, steam rising gently, slow camera pan", "seconds": "10", "size": "1920x1080", "output": null, "error": null, "metadata": { "model": "sora-2", "executed_model": "openai/sora-2", "provider": "openai", "provider_name": "OpenAI", "cost_estimate": 0.21, "cost_currency": "USD" } } ``` ```json Completed theme={null} { "id": "video_abc123", "object": "video", "created_at": 1702345678, "completed_at": 1702345720, "expires_at": 1702432120, "status": "completed", "model": "sora-2", "prompt": "A capybara lounging in a hot spring, steam rising gently, slow camera pan", "seconds": "10", "size": "1920x1080", "output": { "url": "https://media.lumenfall.ai/video_abc123.mp4", "content_type": "video/mp4", "size_bytes": 15728640 }, "error": null, "metadata": { "model": "sora-2", "executed_model": "openai/sora-2", "provider": "openai", "provider_name": "OpenAI", "cost": 0.21, "cost_currency": "USD" } } ``` # Get webhook secret Source: https://docs.lumenfall.ai/api-reference/webhooks/secret GET https://api.lumenfall.ai/v1/webhooks/secret Retrieve your organization's webhook signing secret Returns the webhook signing secret for your organization. Use this secret to [verify webhook signatures](/webhooks#verifying-signatures). A secret is automatically provisioned the first time you call this endpoint or use a `webhook_url` in a request. This endpoint uses the native Lumenfall API path (`/v1/webhooks/secret`), not the OpenAI-compatible prefix (`/openai/v1`). ## Response Your webhook signing secret, prefixed with `whsec_`. Store this securely - treat it like an API key. 
```bash cURL theme={null} curl https://api.lumenfall.ai/v1/webhooks/secret \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ```python Python theme={null} import requests response = requests.get( "https://api.lumenfall.ai/v1/webhooks/secret", headers={"Authorization": "Bearer your-lumenfall-api-key"} ) secret = response.json()["key"] print(secret) # whsec_... ``` ```typescript TypeScript theme={null} const response = await fetch("https://api.lumenfall.ai/v1/webhooks/secret", { headers: { Authorization: "Bearer your-lumenfall-api-key" }, }); const { key } = await response.json(); console.log(key); // whsec_... ``` ```json 200 Response theme={null} { "key": "whsec_dGhpcyBpcyBhbiBleGFtcGxl..." } ``` # Authentication Source: https://docs.lumenfall.ai/authentication Authenticate with the Lumenfall API All requests to the Lumenfall API require authentication using an API key. ## Getting your API key 1. Sign in to your [Lumenfall dashboard](https://lumenfall.ai/app) 2. Navigate to **API Keys** 3. Click **Create API Key** 4. Copy and securely store your key API keys are only shown once when created. Store your key securely - you won't be able to view it again. 
## Using your API key Include your API key in the `Authorization` header of every request: ```bash theme={null} Authorization: Bearer lmnfl_abc123.xyz789 ``` ### Example request ```bash theme={null} curl https://api.lumenfall.ai/openai/v1/models \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ### With OpenAI SDK Since Lumenfall is OpenAI-compatible, configure the SDK with your Lumenfall key: ```python Python theme={null} from openai import OpenAI client = OpenAI( api_key="lmnfl_abc123.xyz789", base_url="https://api.lumenfall.ai/openai/v1" ) ``` ```typescript TypeScript theme={null} import OpenAI from "openai"; const client = new OpenAI({ apiKey: "lmnfl_abc123.xyz789", baseURL: "https://api.lumenfall.ai/openai/v1", }); ``` ## Managing API keys From your dashboard, you can: * **Create** new API keys with unique titles * **Revoke** keys that are no longer needed or may be compromised * **Delete** revoked keys to keep your dashboard clean * **View usage** per key to track which applications are consuming your quota ## Security best practices Use environment variables or secret management tools instead of hardcoding keys in your code. ```bash theme={null} # .env file (add to .gitignore) LUMENFALL_API_KEY=lmnfl_abc123.xyz789 ``` API keys should never be included in browser JavaScript, mobile apps, or any code that runs on user devices. Client-side code can be inspected, and exposed keys can be stolen and abused. Instead, proxy requests through your own backend server to keep your API key secure. Alternatively, you can create a separate API key for each of your users so that you can limit their use. Create distinct API keys for development, staging, and production. This makes it easier to rotate keys and track usage. Regularly rotate your API keys, especially for production environments. Revoke old keys after confirming the new ones work. If you suspect a key has been exposed, revoke it immediately from your dashboard and create a new one. 
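As a sketch of the environment-variable approach above, a small startup helper can fail fast when the key is missing instead of surfacing a confusing authentication error on the first API call. The helper name is illustrative, not part of any SDK:

```python
import os


def load_api_key(env_var: str = "LUMENFALL_API_KEY") -> str:
    """Read the API key from the environment, failing fast if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before starting the application")
    return key


# Pass the key to any OpenAI-compatible client, e.g.:
# client = OpenAI(api_key=load_api_key(),
#                 base_url="https://api.lumenfall.ai/openai/v1")
```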
# Billing Source: https://docs.lumenfall.ai/billing How pricing and billing works Lumenfall offers transparent, usage-based pricing with two billing modes. ## Billing modes ### Prepaid Add funds to your account and pay as you go. Requests are charged against your balance in near real time. * **Balance check:** When your balance runs low, each request first verifies that sufficient funds remain, which adds slight latency. The threshold for this behavior varies based on your recent usage. * **Auto top-up:** Optionally configure automatic refills when your balance is low. Manage your balance on the [Credits](/credits) page. ### Postpaid (invoice billing) Generate now, pay later. You receive a monthly invoice for your usage. * **Credit limit:** Based on your account history * **Invoicing:** Monthly billing cycle Contact [support@lumenfall.ai](mailto:support@lumenfall.ai) to request invoice billing. ## Pricing Pricing varies by model and is calculated per request. Check the [model catalog](https://lumenfall.ai/models) for current pricing details. Pricing in the model catalog and from the cost estimation endpoint is approximate. Effective pricing is calculated after the request runs, because it depends on the outputs. ## Cost estimation Use dry run mode to estimate costs before generating. Add `?dryRun=true` to any request to get a cost estimate without executing it. See the [Cost estimation](/api-reference/cost-estimation) API reference for details and response format. ## Effective cost on responses Every response that delivers media includes a `metadata` object with the effective cost of the request. Because final pricing depends on outputs (resolution, duration, format), this is the authoritative cost — not an estimate.
### Images The image generation response includes `metadata.cost` with the effective cost: ```json theme={null} { "created": 1702345678, "data": [{ "url": "https://media.lumenfall.ai/abc123.png" }], "metadata": { "model": "gemini-3-pro-image", "executed_model": "vertex/gemini-3-pro-image", "cost": 0.04, "cost_currency": "USD" } } ``` ### Videos Video generation is asynchronous. Cost fields on `metadata` change depending on the job status: * **In progress:** `cost_estimate` — an estimated cost based on the request parameters. * **Completed:** `cost` — the final effective cost. ```json theme={null} { "id": "video_abc123", "status": "completed", "output": { "url": "https://media.lumenfall.ai/video_abc123.mp4" }, "metadata": { "model": "sora-2", "executed_model": "openai/sora-2", "cost": 0.21, "cost_currency": "USD" } } ``` The `metadata` field also includes provider routing details like `provider`, `provider_name`, `executed_model`, and `upstream_id`. See the API reference for the full schema. ## Insufficient balance errors If your prepaid balance is too low, requests return a `402` error: ```json theme={null} { "error": { "message": "Insufficient balance", "type": "billing_error", "code": "INSUFFICIENT_BALANCE" } } ``` To resolve: 1. [Add funds](/credits#add-funds) to your account 2. [Enable auto top-up](/credits#auto-top-up) to prevent future interruptions 3. Consider switching to postpaid billing for uninterrupted service # Changelog Source: https://docs.lumenfall.ai/changelog Product updates and announcements ## Chrome extension Generate and edit images directly in your browser with the [Lumenfall AI Image Studio](https://chromewebstore.google.com/detail/lumenfall-ai-image-studio/ebmnoplgfdpoidaaoombkjcmfbifapdj) Chrome extension. Open the side panel on any page to create images from text prompts, right-click any image on the web to edit it with Lumenfall, or extract page text to brainstorm contextual illustrations. 
All generated images are saved locally in a built-in gallery. The extension is [open source](https://github.com/lumenfall-ai/lumenfall-chrome-extension) — clone it and build your own tools on top of the Lumenfall API.

See the [Chrome Extension integration guide](/integrations/chrome-extension) for setup and usage.

## Chat completions endpoint

You can now make LLM calls through Lumenfall using the [chat completions](/api-reference/chat/completions) endpoint — no need to sign up for a separate text provider or switch base URLs in your SDK.

Modern media apps don't just generate images; they also need LLM calls for prompting, captioning, moderation, and orchestration. Now you can do it all from one platform.

Text completions are powered by [OpenRouter](https://openrouter.ai), so hundreds of models are available and all OpenRouter features are fully supported. Use models from OpenAI, Google, Anthropic, Meta, Mistral, and many more providers.

## Text-to-vector support

Lumenfall now supports text-to-vector (SVG) models. Generate scalable vector graphics through the same OpenAI-compatible API you already use for raster images.

Models like [Recraft V4 SVG](https://lumenfall.ai/models/recraft-ai/recraft-v4-svg) can now return SVG output natively.

## Playground upgrade

The playground received a major upgrade with new generation controls, model management, and UI improvements.

* **Generation controls** — configure resolution (Standard / HD 2K / Ultra 4K), style, seed, and output format directly from the playground. The aspect ratio picker now honors defaults from loaded examples.
* **Model sets and inline editing** — curated preset model groups (e.g. "Top Models", "Speed Round") drive default panel selections, and clicking a panel's model name opens a searchable inline picker to swap models on the fly.
* **UX improvements** — better drag-and-drop with paste support and input image previews, responsive panel actions that collapse into a compact menu at narrow widths, and fixes for image downloads and clipboard copy. ## xAI provider xAI is now available as a provider, bringing Grok Imagine models to Lumenfall. See [providers](/providers) for the full list. ## Arena and Google Gemini API ### Image arena Compare image generation models side-by-side with blind voting. See results on the community leaderboard. Try it at [lumenfall.ai/arena](https://lumenfall.ai/arena) and [lumenfall.ai/leaderboard](https://lumenfall.ai/leaderboard). ### Google Gemini API provider We now support Google's [Gemini API](https://ai.google.dev) as a provider, in addition to Vertex AI. This gives you more routing options for Google models like Gemini and Imagen. See [providers](/providers) for the full list. ## llms.py extension Use Lumenfall image models inside [llms.py](https://llmspy.org/) - both the CLI and the web UI. Generate and edit images with any Lumenfall model in a single command. See the [llms.py integration guide](/integrations/llmspy) for setup and usage. ## General availability Lumenfall is now generally available. Access all top image and video generation models through a single, OpenAI-compatible API — including models from the Artificial Analysis [Text to Image](https://artificialanalysis.ai/image/leaderboard/text-to-image) and [Image Editing](https://artificialanalysis.ai/image/leaderboard/editing) leaderboards. Each model is backed by multiple providers with automatic failover and zero markup on model costs. Model features like requesting a specific output size or format are normalized across models and providers, so they always just work. 
### OpenAI-compatible API * Drop-in replacement for OpenAI's [image generation](/api-reference/images/generate) and [editing](/api-reference/images/edit) endpoints * Works with [OpenAI SDKs](/client-libraries/openai-sdk) in Python, TypeScript, Go, C#, Java, and Ruby * Also supports [Vercel AI SDK](/client-libraries/vercel-ai-sdk), [LiteLLM](/client-libraries/litellm), and [RubyLLM](/client-libraries/rubyllm) ### Multi-provider routing * Seven [providers](/providers): Google Vertex AI, Google Gemini API, OpenAI, Replicate, fal.ai, Fireworks AI, and Runware * [Automatic failover](/routing) when a provider doesn't respond or returns an error * Force a specific provider by prefixing the model slug (e.g., `vertex/gemini-3-pro-image-preview`) ### Unified model behavior * [Feature emulation](/unified-model-behavior) ensures all models behave consistently * Request any size (e.g., "1920x1080") and we map it to each provider's and model's expected format * Output format conversion (PNG, JPEG, WebP, AVIF) across all providers * Get responses as URL or base64 (`response_format`), regardless of provider support * Standardized error responses following OpenAI conventions ### Dashboard * View all [requests](/requests) including error responses * View available [models](/models) with pricing ### Billing * [Usage-based pricing](/billing) with no subscriptions or commitments * Prepaid credits with optional auto top-up * [Dry run mode](/api-reference/cost-estimation) for cost estimation before generating # HTTP / cURL Source: https://docs.lumenfall.ai/client-libraries/http Make direct HTTP requests to the Lumenfall API You can integrate Lumenfall with any programming language or tool that supports HTTP requests. This guide shows how to use the API directly without an SDK. 
## Base URL All API requests should be made to: ``` https://api.lumenfall.ai/openai/v1 ``` ## Authentication Include your API key in the `Authorization` header: ``` Authorization: Bearer lmnfl_your_api_key ``` ## Generate images Make a POST request to `/images/generations`: ```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/images/generations \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "gemini-3-pro-image", "prompt": "A serene mountain landscape at sunset with dramatic clouds", "n": 1, "size": "1024x1024" }' ``` ```python Python (requests) theme={null} import requests import os response = requests.post( "https://api.lumenfall.ai/openai/v1/images/generations", headers={ "Authorization": f"Bearer {os.environ['LUMENFALL_API_KEY']}", "Content-Type": "application/json" }, json={ "model": "gemini-3-pro-image", "prompt": "A serene mountain landscape at sunset with dramatic clouds", "n": 1, "size": "1024x1024" } ) data = response.json() print(data["data"][0]["url"]) ``` ```javascript JavaScript (fetch) theme={null} const response = await fetch( "https://api.lumenfall.ai/openai/v1/images/generations", { method: "POST", headers: { Authorization: `Bearer ${process.env.LUMENFALL_API_KEY}`, "Content-Type": "application/json", }, body: JSON.stringify({ model: "gemini-3-pro-image", prompt: "A serene mountain landscape at sunset with dramatic clouds", n: 1, size: "1024x1024", }), } ); const data = await response.json(); console.log(data.data[0].url); ``` ```go Go theme={null} package main import ( "bytes" "encoding/json" "fmt" "net/http" "os" ) func main() { payload := map[string]interface{}{ "model": "gemini-3-pro-image", "prompt": "A serene mountain landscape at sunset with dramatic clouds", "n": 1, "size": "1024x1024", } body, _ := json.Marshal(payload) req, _ := http.NewRequest( "POST", "https://api.lumenfall.ai/openai/v1/images/generations", bytes.NewBuffer(body), ) req.Header.Set("Authorization", 
"Bearer "+os.Getenv("LUMENFALL_API_KEY")) req.Header.Set("Content-Type", "application/json") client := &http.Client{} resp, _ := client.Do(req) defer resp.Body.Close() var result map[string]interface{} json.NewDecoder(resp.Body).Decode(&result) data := result["data"].([]interface{}) first := data[0].(map[string]interface{}) fmt.Println(first["url"]) } ``` ```php PHP theme={null} true, CURLOPT_POST => true, CURLOPT_HTTPHEADER => [ 'Authorization: Bearer ' . $apiKey, 'Content-Type: application/json' ], CURLOPT_POSTFIELDS => json_encode([ 'model' => 'gemini-3-pro-image', 'prompt' => 'A serene mountain landscape at sunset with dramatic clouds', 'n' => 1, 'size' => '1024x1024' ]) ]); $response = curl_exec($ch); curl_close($ch); $data = json_decode($response, true); echo $data['data'][0]['url']; ``` ### Response ```json theme={null} { "created": 1702345678, "data": [ { "url": "https://media.lumenfall.ai/abc123.png", "revised_prompt": "A serene mountain landscape at sunset with dramatic clouds and golden light" } ] } ``` ### Request parameters | Parameter | Type | Default | Description | | ----------------- | ------- | ----------- | -------------------------------------------------------------------- | | `model` | string | required | Model ID (e.g., `gemini-3-pro-image`, `gpt-image-1.5`, `flux.2-max`) | | `prompt` | string | required | Text description of the desired image | | `n` | integer | `1` | Number of images to generate (1-10) | | `size` | string | `1024x1024` | Image dimensions | | `quality` | string | `standard` | Image quality (`standard` or `hd`) | | `response_format` | string | `url` | Response format (`url` or `b64_json`) | | `style` | string | `vivid` | Image style (`vivid` or `natural`) | ### Get base64 response ```bash theme={null} curl https://api.lumenfall.ai/openai/v1/images/generations \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "gpt-image-1.5", "prompt": "A cute robot", "response_format": 
"b64_json" }' ``` ## Passing additional parameters Include any additional parameters in the JSON request body: ```bash theme={null} curl https://api.lumenfall.ai/openai/v1/images/generations \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "gemini-3-pro-image", "prompt": "A capybara relaxing in a hot spring", "size": "1024x1024", "seed": 12345, "custom_provider_param": "value" }' ``` Additional parameters are passed directly to the upstream provider. Check the provider's documentation for supported parameters. ## Edit images Make a POST request to `/images/edits` with multipart form data: ```bash cURL theme={null} curl https://api.lumenfall.ai/openai/v1/images/edits \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -F "image=@original.png" \ -F "prompt=Add a rainbow in the sky" \ -F "model=gpt-image-1.5" \ -F "size=1024x1024" ``` ```python Python (requests) theme={null} import requests import os response = requests.post( "https://api.lumenfall.ai/openai/v1/images/edits", headers={ "Authorization": f"Bearer {os.environ['LUMENFALL_API_KEY']}" }, files={ "image": open("original.png", "rb") }, data={ "prompt": "Add a rainbow in the sky", "model": "gpt-image-1.5", "size": "1024x1024" } ) data = response.json() print(data["data"][0]["url"]) ``` ```javascript JavaScript (fetch) theme={null} const formData = new FormData(); formData.append("image", await fs.openAsBlob("original.png")); formData.append("prompt", "Add a rainbow in the sky"); formData.append("model", "gpt-image-1.5"); formData.append("size", "1024x1024"); const response = await fetch( "https://api.lumenfall.ai/openai/v1/images/edits", { method: "POST", headers: { Authorization: `Bearer ${process.env.LUMENFALL_API_KEY}`, }, body: formData, } ); const data = await response.json(); console.log(data.data[0].url); ``` ### With a mask Provide a mask to specify which areas should be edited: ```bash theme={null} curl https://api.lumenfall.ai/openai/v1/images/edits 
\ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -F "image=@original.png" \ -F "mask=@mask.png" \ -F "prompt=A sunlit indoor lounge area with a pool" \ -F "model=gpt-image-1.5" \ -F "size=1024x1024" ``` ## List models Get all available models: ```bash theme={null} curl https://api.lumenfall.ai/openai/v1/models \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` Response: ```json theme={null} { "object": "list", "data": [ { "id": "gemini-3-pro-image", "object": "model", "created": 1702345678, "owned_by": "lumenfall" }, { "id": "gpt-image-1.5", "object": "model", "created": 1702345678, "owned_by": "lumenfall" } ] } ``` ## Get a specific model ```bash theme={null} curl https://api.lumenfall.ai/openai/v1/models/gemini-3-pro-image \ -H "Authorization: Bearer $LUMENFALL_API_KEY" ``` ## Estimate costs (dry run) Add `?dryRun=true` to get a cost estimate without generating: ```bash theme={null} curl "https://api.lumenfall.ai/openai/v1/images/generations?dryRun=true" \ -H "Authorization: Bearer $LUMENFALL_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "gemini-3-pro-image", "prompt": "A test prompt", "size": "1024x1024" }' ``` Response: ```json theme={null} { "estimated": true, "model": "gemini-3-pro-image", "provider": "vertex", "total_cost_micros": 40000, "currency": "USD" } ``` ## Response headers Lumenfall includes useful headers in every response: | Header | Description | | ------------------------ | --------------------------------------- | | `X-Lumenfall-Provider` | The provider that handled the request | | `X-Lumenfall-Model` | The model that was used | | `X-Lumenfall-Request-Id` | Unique request identifier for debugging | ## Error handling ### Error response format ```json theme={null} { "error": { "message": "Invalid API key", "type": "authentication_error", "code": "AUTHENTICATION_FAILED" } } ``` ### HTTP status codes | Status | Code | Description | | ------ | ------------------------- | ----------------------------- | | 400 | `INVALID_REQUEST` | 
Invalid request parameters | | 401 | `AUTHENTICATION_FAILED` | Invalid or missing API key | | 402 | `INSUFFICIENT_BALANCE` | Account balance too low | | 404 | `MODEL_NOT_FOUND` | Requested model doesn't exist | | 429 | `RATE_LIMITED` | Too many requests | | 502 | `ALL_PROVIDERS_EXHAUSTED` | All providers failed | ## Next steps Explore the full API documentation. Learn more about API key management. # LiteLLM Source: https://docs.lumenfall.ai/client-libraries/litellm Use LiteLLM with Lumenfall [LiteLLM](https://github.com/BerriAI/litellm) is a popular Python library that provides a unified interface to 100+ LLM providers. You can use LiteLLM with Lumenfall to access many more media models out of the box than LiteLLM supports through other providers. LiteLLM can be used in two ways: * **Python SDK:** Your code connects directly to the Lumenfall API. Recommended if you just want to use media models, as Lumenfall unifies all models and providers to correctly work with the LiteLLM SDK. * **Proxy server:** You can use any OpenAI-compatible SDK to connect to the LiteLLM proxy. The proxy then calls the Lumenfall API. This mode makes sense if you are already using the proxy or want to use other providers alongside Lumenfall, especially for text models. Both modes are covered below. ## Connecting Lumenfall to LiteLLM LiteLLM uses a **provider prefix** on the model name (e.g., `openai/gpt-image-1.5`, `vertex_ai/gemini-2.5-flash-image`) to determine which backend to route a request to. Lumenfall is not a built-in LiteLLM provider, so we need to add a prefix to make sure requests are routed to Lumenfall. 
There are three integration approaches: | Approach | Model format | Status | | ---------------------------- | -------------------------------- | --------------------------------------------------------------------------------------------- | | `openai/` prefix | `openai/gemini-3-pro-image` | **Recommended** - works for generation and editing ([minor limitations](#openai-recommended)) | | `hosted_vllm/` prefix | `hosted_vllm/gemini-3-pro-image` | Generation only - no `image_edit` support | | Custom `lumenfall/` provider | `lumenfall/gemini-3-pro-image` | Not yet working - [upstream bugs](#custom-lumenfall-provider) | The LiteLLM provider prefix (e.g., `openai/`, `hosted_vllm/`) is not the same as a [Lumenfall provider](/providers) prefix (e.g., `fal/`, `replicate/`). The LiteLLM prefix tells LiteLLM which *client* to use for the request. The Lumenfall provider prefix tells Lumenfall which *backend provider* to route to. They are independent - for example, `openai/fal/flux.2-max` uses the LiteLLM `openai/` client to send a request to Lumenfall, which then routes to fal.ai. This guide uses the `openai/` prefix throughout. See [Provider prefix options](#provider-prefix-options) at the end for details on the alternatives. ## Installation ```bash theme={null} pip install litellm ``` ## Configuration Set your Lumenfall credentials once so you don't need to pass them on every call: ```python theme={null} import litellm litellm.api_base = "https://api.lumenfall.ai/openai/v1" litellm.api_key = "lmnfl_your_api_key" ``` Alternatively, set environment variables before running your script: ```bash theme={null} export OPENAI_API_BASE="https://api.lumenfall.ai/openai/v1" export OPENAI_API_KEY="lmnfl_your_api_key" ``` If using the `hosted_vllm/` prefix, set `HOSTED_VLLM_API_BASE` and `HOSTED_VLLM_API_KEY` instead. 
## Python SDK

### Generate images

Use `litellm.image_generation()` to create images:

```python theme={null}
import litellm

response = litellm.image_generation(
    model="openai/gemini-3-pro-image",
    prompt="A capybara lounging in a hot spring at sunset",
    n=1,
    size="1024x1024"
)

image_url = response.data[0].url
print(image_url)
```

Lumenfall handles parameter transformation on the backend. When you request `size="1024x1024"` from a model that uses different dimensions, or that only accepts other parameters such as aspect ratio, Lumenfall automatically maps your request to the closest supported option.

#### Generation options

Use `extra_body` to pass parameters that aren't part of LiteLLM's function signature straight through to the Lumenfall API.

```python theme={null}
response = litellm.image_generation(
    model="openai/gpt-image-1.5",
    prompt="A capybara wearing a tiny top hat in a garden",
    n=1,
    size="1792x1024",  # landscape orientation
    extra_body={"response_format": "b64_json"}  # use extra_body for response_format
)
```

With the `openai/` prefix, `response_format` and `style` must go through `extra_body` because LiteLLM validates parameters against its built-in OpenAI model configs, which don't include these options for non-OpenAI models. Standard parameters like `n` and `size` can be passed directly.
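When you request `response_format: "b64_json"` via `extra_body` as above, the response carries base64-encoded image data instead of a URL. A minimal sketch for decoding it (the `decode_image` helper is our own, not part of LiteLLM):

```python
import base64


def decode_image(b64_data: str) -> bytes:
    """Turn a b64_json payload from an image response into raw image bytes."""
    return base64.b64decode(b64_data)


# Assuming `response` came from litellm.image_generation(...) with
# extra_body={"response_format": "b64_json"}:
#
#     with open("capybara.png", "wb") as f:
#         f.write(decode_image(response.data[0].b64_json))
```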
### Edit images Use `litellm.image_edit()` to edit existing images: ```python theme={null} import litellm response = litellm.image_edit( model="openai/gpt-image-1.5", image=open("original.png", "rb"), prompt="Add a capybara sitting in the foreground", n=1, size="1024x1024" ) edited_url = response.data[0].url print(edited_url) ``` ### Async support Use `aimage_generation()` and `aimage_edit()` for async operations: ```python theme={null} import asyncio import litellm async def generate_and_edit(): # Generate an image gen_response = await litellm.aimage_generation( model="openai/gemini-3-pro-image", prompt="A capybara astronaut floating in space" ) # Edit an existing image with open("original.png", "rb") as img: edit_response = await litellm.aimage_edit( model="openai/gpt-image-1.5", image=img, prompt="Make it look like a watercolor painting" ) return gen_response.data[0].url, edit_response.data[0].url gen_url, edit_url = asyncio.run(generate_and_edit()) ``` ### Forcing a Lumenfall provider Lumenfall routes requests to underlying providers like OpenAI, Vertex AI, Replicate, and fal.ai. By default, Lumenfall selects the best provider automatically. You can force a specific provider by prefixing the model name with the provider slug: ```python theme={null} response = litellm.image_generation( model="openai/fal/flux.2-max", # force fal.ai provider prompt="A capybara swimming in a jungle river" ) ``` See [Providers](/providers) for the full list of provider slugs and how routing works. ### Per-call configuration You can pass `api_key` and `api_base` on individual calls instead of using environment variables: ```python theme={null} response = litellm.image_generation( model="openai/gemini-3-pro-image", prompt="A capybara in a field of sunflowers", api_key="lmnfl_your_api_key", api_base="https://api.lumenfall.ai/openai/v1" ) ``` This is useful when you need different credentials for different calls, or when you want explicit configuration without environment variables. 
## Proxy server The LiteLLM Proxy is a server that exposes an OpenAI-compatible API. You configure Lumenfall models once in a YAML file, then any OpenAI client can use them without knowing about provider prefixes or base URLs. Add this to your `litellm_config.yaml`: ```yaml theme={null} model_list: - model_name: lumenfall-gemini-image litellm_params: model: openai/gemini-3-pro-image api_key: os.environ/LUMENFALL_API_KEY api_base: https://api.lumenfall.ai/openai/v1 - model_name: lumenfall-gpt-image litellm_params: model: openai/gpt-image-1.5 api_key: os.environ/LUMENFALL_API_KEY api_base: https://api.lumenfall.ai/openai/v1 - model_name: lumenfall-flux litellm_params: model: openai/flux.2-max api_key: os.environ/LUMENFALL_API_KEY api_base: https://api.lumenfall.ai/openai/v1 ``` The `os.environ/LUMENFALL_API_KEY` syntax tells the proxy to read the API key from an environment variable at runtime, so you don't hardcode secrets in the config file. Start the proxy: ```bash theme={null} litellm --config litellm_config.yaml ``` Then make requests through it using any OpenAI-compatible SDK. Here's an example using the OpenAI Python SDK: ```python theme={null} from openai import OpenAI client = OpenAI( base_url="http://localhost:4000", api_key="any" # This is your key to authenticate against the LiteLLM proxy. NOT your Lumenfall API Key. ) response = client.images.generate( model="lumenfall-gemini-image", prompt="A capybara napping by a lake at dawn" ) ``` This works with any language or SDK that supports the OpenAI API - the proxy handles the Lumenfall routing internally. ## Provider prefix options This section provides more detail on the trade-offs of each approach. ### openai/ (recommended) The `openai/` prefix works for both image generation and editing. LiteLLM validates parameters against its OpenAI image configs, which blocks `response_format` and `style` for models that aren't `dall-e-2`, `dall-e-3`, or `gpt-image-*`. 
You can work around this by passing blocked parameters via `extra_body`: ```python theme={null} response = litellm.image_generation( model="openai/gpt-image-1.5", prompt="A capybara reading a book under a tree", extra_body={"response_format": "b64_json"} # bypass param validation ) ``` This workaround is not needed for `image_edit`, where `response_format` is supported directly. ### hosted\_vllm/ (generation only) The `hosted_vllm/` prefix provides full parameter passthrough with no validation - useful if you need `response_format` or `style` without the `extra_body` workaround. ```python theme={null} import os os.environ["HOSTED_VLLM_API_BASE"] = "https://api.lumenfall.ai/openai/v1" os.environ["HOSTED_VLLM_API_KEY"] = "lmnfl_your_api_key" response = litellm.image_generation( model="hosted_vllm/gemini-3-pro-image", prompt="A capybara riding a bicycle through a park", response_format="b64_json" # works directly, no extra_body needed ) ``` **Limitation:** LiteLLM's `hosted_vllm` provider does not support the `image_edit` endpoint. If you need image editing, use `openai/` instead. ### Custom lumenfall/ provider LiteLLM supports [registering custom providers via a JSON file](https://docs.litellm.ai/docs/providers/openai_compatible#quick-start---add-a-json), which would allow a cleaner `lumenfall/model-name` syntax without `api_base` on every call. This would be the ideal approach, but it currently has upstream bugs for image endpoints: * **Image generation** returns empty results (`data=[]`) despite the request succeeding upstream ([related: #18961](https://github.com/BerriAI/litellm/issues/18961), [#6513](https://github.com/BerriAI/litellm/issues/6513)) * **Image editing** fails with a provider validation error ([related: #7575](https://github.com/BerriAI/litellm/issues/7575)) These stem from [PR #18291](https://github.com/BerriAI/litellm/pull/18291) only wiring up JSON provider routing for chat completions, not image endpoints. 
Once resolved upstream, this would be the preferred approach. We will update this guide when that happens.

## Next steps

Explore the full API documentation.

See all available image generation models.

# OpenAI SDK
Source: https://docs.lumenfall.ai/client-libraries/openai-sdk

Use the official OpenAI SDKs with Lumenfall

Lumenfall is fully compatible with all official OpenAI SDKs. Since Lumenfall implements the OpenAI API specification, you can use any official SDK by simply changing the base URL and API key.

**Official SDKs:**

* [openai-python](https://github.com/openai/openai-python) - Python
* [openai-node](https://github.com/openai/openai-node) - JavaScript / TypeScript
* [openai-go](https://github.com/openai/openai-go) - Go
* [openai-dotnet](https://github.com/openai/openai-dotnet) - C# / .NET
* [openai-java](https://github.com/openai/openai-java) - Java
* [openai-ruby](https://github.com/openai/openai-ruby) - Ruby

## Installation

```bash Python theme={null}
pip install openai
```

```bash JavaScript / TypeScript theme={null}
npm install openai
```

```bash Go theme={null}
go get github.com/openai/openai-go/v3
```

```bash C# / .NET theme={null}
dotnet add package OpenAI
```

```xml Java (Maven) theme={null}
<dependency>
    <groupId>com.openai</groupId>
    <artifactId>openai-java</artifactId>
    <version>0.20.0</version>
</dependency>
```

```bash Ruby theme={null}
gem install openai
```

## Configuration

```python Python theme={null}
from openai import OpenAI

client = OpenAI(
    api_key="lmnfl_your_api_key",
    base_url="https://api.lumenfall.ai/openai/v1"
)
```

```typescript JavaScript / TypeScript theme={null}
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "lmnfl_your_api_key",
  baseURL: "https://api.lumenfall.ai/openai/v1",
});
```

```go Go theme={null}
package main

import (
	"github.com/openai/openai-go/v3"
	"github.com/openai/openai-go/v3/option"
)

func main() {
	client := openai.NewClient(
		option.WithAPIKey("lmnfl_your_api_key"),
		option.WithBaseURL("https://api.lumenfall.ai/openai/v1"),
	)
}
```

```csharp C# / .NET theme={null}
using OpenAI;
using
OpenAI.Images; var options = new OpenAIClientOptions { Endpoint = new Uri("https://api.lumenfall.ai/openai/v1") }; var client = new OpenAIClient("lmnfl_your_api_key", options); var imageClient = client.GetImageClient("gemini-3-pro-image"); ``` ```java Java theme={null} import com.openai.client.OpenAIClient; import com.openai.client.okhttp.OpenAIOkHttpClient; OpenAIClient client = OpenAIOkHttpClient.builder() .apiKey("lmnfl_your_api_key") .baseUrl("https://api.lumenfall.ai/openai/v1") .build(); ``` ```ruby Ruby theme={null} require "openai" client = OpenAI::Client.new( api_key: "lmnfl_your_api_key", base_url: "https://api.lumenfall.ai/openai/v1" ) ``` ## Chat completions ```python Python theme={null} response = client.chat.completions.create( model="google/gemini-3-flash-preview", messages=[ {"role": "user", "content": "Why are capybaras so chill?"} ] ) print(response.choices[0].message.content) ``` ```typescript JavaScript / TypeScript theme={null} const response = await client.chat.completions.create({ model: "google/gemini-3-flash-preview", messages: [ { role: "user", content: "Why are capybaras so chill?" 
}, ], }); console.log(response.choices[0].message.content); ``` ```go Go theme={null} response, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{ Model: openai.F("google/gemini-3-flash-preview"), Messages: openai.F([]openai.ChatCompletionMessageParamUnion{ openai.UserMessage("Why are capybaras so chill?"), }), }) if err != nil { panic(err) } fmt.Println(response.Choices[0].Message.Content) ``` ```csharp C# / .NET theme={null} var chatClient = client.GetChatClient("google/gemini-3-flash-preview"); ChatCompletion response = await chatClient.CompleteChatAsync( [new UserChatMessage("Why are capybaras so chill?")] ); Console.WriteLine(response.Content[0].Text); ``` ```java Java theme={null} var params = ChatCompletionCreateParams.builder() .model("google/gemini-3-flash-preview") .addMessage(ChatCompletionUserMessageParam.builder() .content("Why are capybaras so chill?") .build()) .build(); var response = client.chat().completions().create(params); System.out.println(response.choices().get(0).message().content().orElse(null)); ``` ```ruby Ruby theme={null} response = client.chat.completions.create( model: "google/gemini-3-flash-preview", messages: [ { role: "user", content: "Why are capybaras so chill?" 
} ] ) puts response.choices.first.message.content ``` ### Streaming ```python Python theme={null} stream = client.chat.completions.create( model="google/gemini-3-flash-preview", messages=[ {"role": "user", "content": "Tell me a fun fact about capybaras"} ], stream=True ) for chunk in stream: if chunk.choices[0].delta.content: print(chunk.choices[0].delta.content, end="") ``` ```typescript JavaScript / TypeScript theme={null} const stream = await client.chat.completions.create({ model: "google/gemini-3-flash-preview", messages: [ { role: "user", content: "Tell me a fun fact about capybaras" }, ], stream: true, }); for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content; if (content) process.stdout.write(content); } ``` ## Generate images ```python Python theme={null} response = client.images.generate( model="gemini-3-pro-image", prompt="A serene mountain landscape at sunset with dramatic clouds", n=1, size="1024x1024" ) print(response.data[0].url) ``` ```typescript JavaScript / TypeScript theme={null} const response = await client.images.generate({ model: "gemini-3-pro-image", prompt: "A serene mountain landscape at sunset with dramatic clouds", n: 1, size: "1024x1024", }); console.log(response.data[0].url); ``` ```go Go theme={null} response, err := client.Images.Generate(context.Background(), openai.ImageGenerateParams{ Model: openai.F("gemini-3-pro-image"), Prompt: openai.F("A serene mountain landscape at sunset with dramatic clouds"), N: openai.Int(1), Size: openai.F(openai.ImageGenerateParamsSize1024x1024), }) if err != nil { panic(err) } fmt.Println(response.Data[0].URL) ``` ```csharp C# / .NET theme={null} GeneratedImage image = await imageClient.GenerateImageAsync( "A serene mountain landscape at sunset with dramatic clouds", new ImageGenerationOptions { Size = GeneratedImageSize.W1024xH1024, Quality = GeneratedImageQuality.Standard } ); Console.WriteLine(image.ImageUri); ``` ```java Java theme={null} ImagesResponse response = 
client.images().generate(ImageGenerateParams.builder() .model("gemini-3-pro-image") .prompt("A serene mountain landscape at sunset with dramatic clouds") .n(1) .size(ImageGenerateParams.Size._1024X1024) .build()); System.out.println(response.data().get(0).url().orElse(null)); ``` ```ruby Ruby theme={null} response = client.images.generate( model: "gemini-3-pro-image", prompt: "A serene mountain landscape at sunset with dramatic clouds", size: "1024x1024" ) puts response.data.first.url ``` ## Edit images ```python Python theme={null} response = client.images.edit( model="gpt-image-1.5", image=open("original.png", "rb"), prompt="Add a rainbow in the sky", n=1, size="1024x1024" ) print(response.data[0].url) ``` ```typescript JavaScript / TypeScript theme={null} import fs from "fs"; const response = await client.images.edit({ model: "gpt-image-1.5", image: fs.createReadStream("original.png"), prompt: "Add a rainbow in the sky", n: 1, size: "1024x1024", }); console.log(response.data[0].url); ``` ```go Go theme={null} imageFile, _ := os.Open("original.png") defer imageFile.Close() response, err := client.Images.Edit(context.Background(), openai.ImageEditParams{ Model: openai.F("gpt-image-1.5"), Image: openai.F[openai.ImageEditParamsImageUnion](openai.NewImageFile("original.png", imageFile)), Prompt: openai.F("Add a rainbow in the sky"), N: openai.Int(1), Size: openai.F(openai.ImageEditParamsSize1024x1024), }) if err != nil { panic(err) } fmt.Println(response.Data[0].URL) ``` ```csharp C# / .NET theme={null} var imageClient = client.GetImageClient("gpt-image-1.5"); using var imageStream = File.OpenRead("original.png"); GeneratedImage editedImage = await imageClient.GenerateImageEditAsync( imageStream, "original.png", "Add a rainbow in the sky", new ImageEditOptions { Size = GeneratedImageSize.W1024xH1024 } ); Console.WriteLine(editedImage.ImageUri); ``` ```java Java theme={null} InputStream imageStream = Files.newInputStream(Path.of("original.png")); ImagesResponse 
response = client.images().edit(ImageEditParams.builder() .model("gpt-image-1.5") .image(imageStream) .prompt("Add a rainbow in the sky") .n(1) .size(ImageEditParams.Size._1024X1024) .build()); System.out.println(response.data().get(0).url().orElse(null)); ``` ```ruby Ruby theme={null} response = client.images.edit( model: "gpt-image-1.5", image: Pathname("original.png"), prompt: "Add a rainbow in the sky", size: "1024x1024" ) puts response.data.first.url ``` ## Generate videos Video generation is asynchronous. Submit a request with `client.videos.create()`, then poll with `client.videos.retrieve()` until the video is ready. ```python Python theme={null} import time # Submit a video generation request video = client.videos.create( model="sora-2", prompt="A capybara splashing in a river at golden hour", seconds=5, size="1920x1080", ) # Poll until the video is ready while video.status not in ("completed", "failed"): time.sleep(5) video = client.videos.retrieve(video.id) print(video.output.url) ``` ```typescript JavaScript / TypeScript theme={null} // Submit a video generation request let video = await client.videos.create({ model: "sora-2", prompt: "A capybara splashing in a river at golden hour", seconds: 5, size: "1920x1080", }); // Poll until the video is ready while (video.status !== "completed" && video.status !== "failed") { await new Promise((r) => setTimeout(r, 5000)); video = await client.videos.retrieve(video.id); } console.log(video.output.url); ``` ### Video generation options | Parameter | Type | Default | Description | | -------------- | ---------------- | -------- | ------------------------------------------------------------------- | | `model` | string | required | Model ID (e.g., `sora-2`) | | `prompt` | string | required | Text description of the desired video | | `seconds` | string or number | varies | Duration of the video in seconds | | `size` | string | varies | Video dimensions (e.g., `1920x1080`) or aspect ratio (e.g., `16:9`) | | `n` | 
integer | `1` | Number of videos to generate (1-4) | | `aspect_ratio` | string | - | Aspect ratio (e.g., `16:9`, `9:16`) | | `resolution` | string | - | Resolution shorthand (`720p`, `1080p`) | | `input_image` | string | - | URL of image for image-to-video generation | | `webhook_url` | string | - | URL for completion notification | ## Environment variables All SDKs support environment variables for configuration: ```bash theme={null} export OPENAI_API_KEY="lmnfl_your_api_key" export OPENAI_BASE_URL="https://api.lumenfall.ai/openai/v1" ``` Store your API key in environment variables rather than hardcoding it in your source code. Never commit API keys to version control. ## Image generation options | Parameter | Type | Default | Description | | ----------------- | ------- | ----------- | -------------------------------------------------------------------- | | `model` | string | required | Model ID (e.g., `gemini-3-pro-image`, `gpt-image-1.5`, `flux.2-max`) | | `prompt` | string | required | Text description of the desired image | | `n` | integer | `1` | Number of images to generate (1-10) | | `size` | string | `1024x1024` | Image dimensions | | `quality` | string | `standard` | Image quality (`standard` or `hd`) | | `response_format` | string | `url` | Response format (`url` or `b64_json`) | | `style` | string | `vivid` | Image style (`vivid` or `natural`) | ## Passing additional parameters Lumenfall passes through any additional parameters to the upstream provider. This allows you to use provider-specific features that aren't part of the standard OpenAI API. 
```python Python theme={null} response = client.images.generate( model="gemini-3-pro-image", prompt="A capybara relaxing in a hot spring", size="1024x1024", extra_body={ "seed": 12345, "custom_provider_param": "value" } ) ``` ```typescript JavaScript / TypeScript theme={null} const response = await client.images.generate({ model: "gemini-3-pro-image", prompt: "A capybara relaxing in a hot spring", size: "1024x1024", // @ts-expect-error Provider-specific parameter seed: 12345, // @ts-expect-error Provider-specific parameter custom_provider_param: "value", }); ``` ```go Go theme={null} params := openai.ImageGenerateParams{ Model: openai.F("gemini-3-pro-image"), Prompt: openai.F("A capybara relaxing in a hot spring"), Size: openai.F(openai.ImageGenerateParamsSize1024x1024), } params.SetExtraFields(map[string]any{ "seed": 12345, "custom_provider_param": "value", }) response, err := client.Images.Generate(context.Background(), params) ``` ```csharp C# / .NET theme={null} var options = new ImageGenerationOptions(); options.Patch.Set("$.seed"u8, 12345); options.Patch.Set("$.custom_provider_param"u8, "value"); GeneratedImage image = await imageClient.GenerateImageAsync( "A capybara relaxing in a hot spring", options ); ``` ```java Java theme={null} ImagesResponse response = client.images().generate(ImageGenerateParams.builder() .model("gemini-3-pro-image") .prompt("A capybara relaxing in a hot spring") .size(ImageGenerateParams.Size._1024X1024) .putAdditionalBodyProperty("seed", JsonValue.from(12345)) .putAdditionalBodyProperty("custom_provider_param", JsonValue.from("value")) .build()); ``` ```ruby Ruby theme={null} response = client.images.generate( model: "gemini-3-pro-image", prompt: "A capybara relaxing in a hot spring", size: "1024x1024", request_options: { extra_body: { seed: 12345, custom_provider_param: "value" } } ) ``` Additional parameters are passed directly to the provider. Check the provider's documentation for supported parameters. 
Unsupported parameters may be silently ignored. ## Next steps Explore the full API documentation. See all available models. # Ruby (RubyLLM / ruby-openai) Source: https://docs.lumenfall.ai/client-libraries/ruby Use RubyLLM or ruby-openai with Lumenfall Besides the official [OpenAI SDK for Ruby](/client-libraries/openai-sdk), there are two popular community Ruby libraries for working with OpenAI-compatible APIs: [RubyLLM](https://rubyllm.com) and [ruby-openai](https://github.com/alexrudall/ruby-openai). Both work seamlessly with Lumenfall. ## RubyLLM [RubyLLM](https://rubyllm.com) is currently the most popular Ruby gem for AI interactions, as measured by GitHub stars. It provides a clean, Ruby-idiomatic interface with a beautiful DSL. ### Installation Add to your Gemfile: ```ruby theme={null} gem "ruby_llm" ``` Then run: ```bash theme={null} bundle install ``` ### Configuration When using RubyLLM with Lumenfall, you **must** include two parameters in every image generation call (see below for how to include them): * `provider: "openai"` - Tells RubyLLM to route the call to Lumenfall's OpenAI-compatible API * `assume_model_exists: true` - Bypasses RubyLLM's model registry check, allowing models it doesn't officially support Without these parameters, RubyLLM may attempt to route requests to other providers and the call will fail.
#### Global configuration Configure the API credentials globally: ```ruby theme={null} require "ruby_llm" RubyLLM.configure do |config| config.openai_api_key = ENV["LUMENFALL_API_KEY"] config.openai_api_base = "https://api.lumenfall.ai/openai/v1" end ``` #### Per-call configuration You can also configure RubyLLM per-call without global setup, using `RubyLLM.context`: ```ruby theme={null} require "ruby_llm" ctx = RubyLLM.context do |config| config.openai_api_key = ENV["LUMENFALL_API_KEY"] config.openai_api_base = "https://api.lumenfall.ai/openai/v1" end image = ctx.paint( "A capybara relaxing in a hot spring", model: "gemini-3-pro-image", size: "1024x1024", provider: "openai", assume_model_exists: true ) puts image.url ``` This is useful when you need to make multiple calls with the same configuration or when you need to use different API credentials in different parts of your application. ### Generate images Use the `paint` method with the required Lumenfall parameters: ```ruby theme={null} image = RubyLLM.paint( "A serene mountain landscape at sunset with dramatic clouds", model: "gemini-3-pro-image", size: "1024x1024", provider: "openai", assume_model_exists: true ) puts image.url ``` ### Image editing RubyLLM does not support image editing. This is a known limitation tracked in [GitHub issue #512](https://github.com/crmne/ruby_llm/issues/512). To edit images with Lumenfall, use an alternative SDK or make an HTTP call directly. *** ## ruby-openai [ruby-openai](https://github.com/alexrudall/ruby-openai) is a full-featured, community-maintained Ruby client that closely follows the OpenAI API structure. It is not the official OpenAI Ruby SDK.
### Installation Add to your Gemfile: ```ruby theme={null} gem "ruby-openai" ``` Then run: ```bash theme={null} bundle install ``` ### Configuration ```ruby theme={null} require "openai" client = OpenAI::Client.new( access_token: ENV["LUMENFALL_API_KEY"], uri_base: "https://api.lumenfall.ai/openai/v1" ) ``` Or configure globally: ```ruby theme={null} OpenAI.configure do |config| config.access_token = ENV["LUMENFALL_API_KEY"] config.uri_base = "https://api.lumenfall.ai/openai/v1" end client = OpenAI::Client.new ``` ### Generate images ```ruby theme={null} response = client.images.generate( parameters: { model: "gemini-3-pro-image", prompt: "A capybara relaxing in a hot spring", size: "1024x1024", n: 1 } ) puts response.dig("data", 0, "url") ``` #### Generate multiple images ```ruby theme={null} response = client.images.generate( parameters: { model: "gpt-image-1.5", prompt: "A capybara relaxing in a hot spring", size: "1024x1024", n: 2 } ) response.dig("data").each_with_index do |image, index| puts "Image #{index + 1}: #{image["url"]}" end ``` #### Get base64 response ```ruby theme={null} response = client.images.generate( parameters: { model: "gpt-image-1.5", prompt: "A capybara in a forest", response_format: "b64_json" } ) base64_image = response.dig("data", 0, "b64_json") ``` ### Edit images ```ruby theme={null} response = client.images.edit( parameters: { model: "gpt-image-1.5", image: File.open("original.png", "rb"), prompt: "Add a capybara to this image", size: "1024x1024", n: 1 } ) puts response.dig("data", 0, "url") ``` ### List models ```ruby theme={null} models = client.models.list models["data"].each do |model| puts model["id"] end ``` ### Retrieve a specific model ```ruby theme={null} model = client.models.retrieve(id: "gemini-3-pro-image") puts model["id"] puts model["object"] ``` ## Next steps Explore the full API documentation. See all available image generation models. 
# Vercel AI SDK Source: https://docs.lumenfall.ai/client-libraries/vercel-ai-sdk Use the Vercel AI SDK with Lumenfall The [Vercel AI SDK](https://ai-sdk.dev/) is a popular TypeScript library for building AI-powered applications. It provides a unified API for image generation that works seamlessly with Lumenfall. ## Approaches There are two ways to use the Vercel AI SDK with Lumenfall: | Approach | Package | Status | Best for | | ---------------------- | ---------------- | ------------- | -------------------------------------- | | **OpenAI provider** | `@ai-sdk/openai` | Available now | Quick setup, generation | | **Lumenfall provider** | Coming soon | Coming soon | Full feature support including editing | The `@lumenfall/ai-sdk` community provider is currently in development and will be published to npm soon. In the meantime, use the OpenAI provider - Lumenfall's API is fully OpenAI-compatible. ## Installation ```bash theme={null} npm install ai @ai-sdk/openai ``` ## Configuration Lumenfall's API is OpenAI-compatible, so you can use the `@ai-sdk/openai` provider and point it at Lumenfall - no extra dependencies needed. Use `createOpenAI` to create a provider that points to Lumenfall: ```typescript theme={null} import { createOpenAI } from "@ai-sdk/openai"; const lumenfall = createOpenAI({ apiKey: process.env.LUMENFALL_API_KEY, baseURL: "https://api.lumenfall.ai/openai/v1", }); ``` **Never expose your API key in client-side code.** Always make API calls from server-side routes (API routes, Server Actions, or server components) where the key remains on the server. See [below](#complete-example-nextjs-api-route) for an example setup.
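The provider instance isn't limited to images: because Lumenfall also exposes OpenAI-compatible chat completions, the same `createOpenAI` setup can drive text models too. Here's a minimal sketch, assuming a text model ID from the chat completions docs (any OpenRouter identifier should work the same way):

```typescript
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

// Use .chat() to target the chat completions route, which Lumenfall
// implements, rather than the provider's default Responses API mapping.
const { text } = await generateText({
  model: lumenfall.chat("google/gemini-3-flash-preview"),
  prompt: "Write a one-line caption for a photo of a capybara in a hot spring",
});

console.log(text);
```

This keeps text and image traffic on the same API key and base URL, so prompting or captioning helpers can live alongside your image generation code.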
## Generate images Use the `generateImage` function with your Lumenfall provider: ```typescript theme={null} import { generateImage } from "ai"; import { createOpenAI } from "@ai-sdk/openai"; const lumenfall = createOpenAI({ apiKey: process.env.LUMENFALL_API_KEY, baseURL: "https://api.lumenfall.ai/openai/v1", }); const { image } = await generateImage({ model: lumenfall.image("gemini-3-pro-image"), prompt: "A capybara lounging in a mountain hot spring at sunset", size: "1024x1024", }); // Access the image as base64 or Uint8Array console.log(image.base64); console.log(image.uint8Array); ``` ### Generate multiple images Request multiple images with the `n` parameter: ```typescript theme={null} const { images } = await generateImage({ model: lumenfall.image("gpt-image-1.5"), prompt: "A capybara in a field of sunflowers, watercolor style", n: 4, size: "1024x1024", }); for (const image of images) { console.log(image.base64); } ``` ## Edit images AI SDK 6 supports image editing by passing reference images in a structured `prompt` object: ```typescript theme={null} import { generateImage } from "ai"; import { createOpenAI } from "@ai-sdk/openai"; import fs from "fs"; const lumenfall = createOpenAI({ apiKey: process.env.LUMENFALL_API_KEY, baseURL: "https://api.lumenfall.ai/openai/v1", }); const imageBuffer = fs.readFileSync("original.png"); const { image } = await generateImage({ model: lumenfall.image("gpt-image-1"), prompt: { text: "Add a capybara sitting in the foreground", images: [imageBuffer], }, providerOptions: { openai: { response_format: "b64_json", }, }, }); ``` The `@ai-sdk/openai` provider does not include `response_format` in edit requests because OpenAI's gpt-image models return base64 by default. Lumenfall defaults to URLs (more convenient for most users, and the original default of the OpenAI API), so you must pass `response_format: "b64_json"` via `providerOptions.openai` for the SDK to parse the response correctly.
The upcoming `@lumenfall/ai-sdk` provider will handle this automatically. ## Passing additional parameters Use `providerOptions` to pass provider-specific parameters that aren't part of the standard interface: ```typescript theme={null} const { image } = await generateImage({ model: lumenfall.image("gpt-image-1"), prompt: "A capybara relaxing in a hot spring", size: "1024x1024", providerOptions: { openai: { quality: "high", background: "transparent", output_format: "png", }, }, }); ``` Parameters in `providerOptions.openai` are passed directly to the upstream provider. Supported provider options depend on the model. Check the model's documentation for details. ## Complete example: Next.js API route Here's how to securely use Lumenfall in a Next.js app. The API key stays on the server, and the client calls through a server route. ### API route ```typescript theme={null} // app/api/generate-image/route.ts import { generateImage } from "ai"; import { createOpenAI } from "@ai-sdk/openai"; import { NextResponse } from "next/server"; const lumenfall = createOpenAI({ apiKey: process.env.LUMENFALL_API_KEY, baseURL: "https://api.lumenfall.ai/openai/v1", }); export async function POST(request: Request) { const { prompt, model = "gemini-3-pro-image", size = "1024x1024" } = await request.json(); try { const { images } = await generateImage({ model: lumenfall.image(model), prompt, size, }); return NextResponse.json({ images: images.map((img) => img.base64), }); } catch (err) { return NextResponse.json( { error: err instanceof Error ? err.message : "Failed to generate image" }, { status: 500 } ); } } ``` ### Client component ```tsx theme={null} // app/components/ImageGenerator.tsx "use client"; import { useState } from "react"; export function ImageGenerator() { const [prompt, setPrompt] = useState(""); const [imageData, setImageData] = useState<string[]>([]); const [loading, setLoading] = useState(false); const [error, setError] = useState(""); async function handleSubmit(e: React.FormEvent) { e.preventDefault(); setLoading(true); setError(""); setImageData([]); try { const response = await fetch("/api/generate-image", { method:
"POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ prompt }), }); if (!response.ok) { const data = await response.json(); setError(data.error || "Failed to generate image"); return; } const data = await response.json(); setImageData(data.images); } catch (err) { setError(err instanceof Error ? err.message : "An error occurred"); } finally { setLoading(false); } } return (