Lumenfall is fully compatible with the official OpenAI SDKs. Because Lumenfall implements the OpenAI API specification, you can use any official SDK by changing only the base URL and API key.

Installation

pip install openai

Configuration

from openai import OpenAI

client = OpenAI(
    api_key="lmnfl_your_api_key",
    base_url="https://api.lumenfall.ai/openai/v1"
)

Chat completions

response = client.chat.completions.create(
    model="google/gemini-3-flash-preview",
    messages=[
        {"role": "user", "content": "Why are capybaras so chill?"}
    ]
)

print(response.choices[0].message.content)

Streaming

stream = client.chat.completions.create(
    model="google/gemini-3-flash-preview",
    messages=[
        {"role": "user", "content": "Tell me a fun fact about capybaras"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Generate images

response = client.images.generate(
    model="gemini-3-pro-image",
    prompt="A serene mountain landscape at sunset with dramatic clouds",
    n=1,
    size="1024x1024"
)

print(response.data[0].url)

Edit images

response = client.images.edit(
    model="gpt-image-1.5",
    image=open("original.png", "rb"),
    prompt="Add a rainbow in the sky",
    n=1,
    size="1024x1024"
)

print(response.data[0].url)

Generate videos

Video generation is asynchronous. Submit a request with client.videos.create(), then poll with client.videos.retrieve() until the video is ready.
import time

# Submit a video generation request
video = client.videos.create(
    model="sora-2",
    prompt="A capybara splashing in a river at golden hour",
    seconds=5,
    size="1920x1080",
)

# Poll until the video is ready
while video.status not in ("completed", "failed"):
    time.sleep(5)
    video = client.videos.retrieve(video.id)

print(video.output.url)
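The loop above polls forever if a job gets stuck. A small wrapper with a timeout avoids that; wait_for_video is a hypothetical helper name, not part of either SDK, and it only assumes the client.videos.retrieve() call and status values shown above.

```python
import time

def wait_for_video(client, video_id, timeout=600, interval=5):
    """Poll client.videos.retrieve() until the job reaches a terminal
    status ("completed" or "failed"), or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        video = client.videos.retrieve(video_id)
        if video.status in ("completed", "failed"):
            return video
        time.sleep(interval)
    raise TimeoutError(f"video {video_id} not finished after {timeout}s")
```

Calling wait_for_video(client, video.id) in place of the bare while loop keeps the happy path identical while bounding the worst case.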

Video generation options

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | required | Model ID (e.g., sora-2) |
| prompt | string | required | Text description of the desired video |
| seconds | string or number | varies | Duration of the video in seconds |
| size | string | varies | Video dimensions (e.g., 1920x1080) or aspect ratio (e.g., 16:9) |
| n | integer | 1 | Number of videos to generate (1-4) |
| aspect_ratio | string | - | Aspect ratio (e.g., 16:9, 9:16) |
| resolution | string | - | Resolution shorthand (720p, 1080p) |
| input_image | string | - | URL of image for image-to-video generation |
| webhook_url | string | - | URL for completion notification |
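As a sketch of how these options combine — here an image-to-video request with a completion webhook, using the shorthand resolution and aspect_ratio fields instead of an explicit size. The URLs are placeholders; substitute your own.

```python
# Keyword arguments for client.videos.create(); URL values are placeholders.
video_params = {
    "model": "sora-2",
    "prompt": "A capybara paddling across a misty lake",
    "seconds": 8,
    "resolution": "720p",       # shorthand instead of an explicit size
    "aspect_ratio": "16:9",
    "input_image": "https://example.com/capybara.png",     # image-to-video
    "webhook_url": "https://example.com/hooks/video-done", # notify on completion
}

# video = client.videos.create(**video_params)
```

With webhook_url set you can skip polling entirely and react to the completion notification instead.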

Environment variables

All SDKs support environment variables for configuration:
export OPENAI_API_KEY="lmnfl_your_api_key"
export OPENAI_BASE_URL="https://api.lumenfall.ai/openai/v1"
Store your API key in environment variables rather than hardcoding it in your source code. Never commit API keys to version control.
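With both variables exported, the official Python SDK reads OPENAI_API_KEY and OPENAI_BASE_URL automatically, so the client needs no arguments at all:

```python
from openai import OpenAI

# Reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment.
client = OpenAI()
```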

Image generation options

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | required | Model ID (e.g., gemini-3-pro-image, gpt-image-1.5, flux.2-max) |
| prompt | string | required | Text description of the desired image |
| n | integer | 1 | Number of images to generate (1-10) |
| size | string | 1024x1024 | Image dimensions |
| quality | string | standard | Image quality (standard or hd) |
| response_format | string | url | Response format (url or b64_json) |
| style | string | vivid | Image style (vivid or natural) |
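With response_format="b64_json", the image arrives base64-encoded in response.data[0].b64_json rather than as a URL. A minimal, standard-library-only helper to decode and save it (save_b64_image is a hypothetical name for illustration):

```python
import base64

def save_b64_image(b64_data: str, path: str) -> int:
    """Decode a b64_json image payload and write it to disk.

    Returns the number of bytes written.
    """
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Usage with a generate() response:
# save_b64_image(response.data[0].b64_json, "capybara.png")
```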

Passing additional parameters

Lumenfall passes through any additional parameters to the upstream provider. This allows you to use provider-specific features that aren’t part of the standard OpenAI API.
response = client.images.generate(
    model="gemini-3-pro-image",
    prompt="A capybara relaxing in a hot spring",
    size="1024x1024",
    extra_body={
        "seed": 12345,
        "custom_provider_param": "value"
    }
)
Additional parameters are passed directly to the provider. Check the provider’s documentation for supported parameters. Unsupported parameters may be silently ignored.

Next steps