Lumenfall implements the OpenAI API specification for chat completions, image generation, and video generation. This means you can use any OpenAI-compatible SDK, tool, or library with Lumenfall by simply changing the base URL.

Supported endpoints

Endpoint                    Description
POST /chat/completions      Generate text from a conversation
POST /images/generations    Generate images from text prompts
POST /images/edits          Edit images using text instructions
POST /videos                Generate videos from text or image prompts
GET /videos/{id}            Get video generation status and output
GET /models                 List available models
GET /models/{id}            Get details about a specific model
All endpoints are served under the base URL:
https://api.lumenfall.ai/openai/v1
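
For instance, a raw HTTP call to the chat completions endpoint might look like the following sketch, assuming standard OpenAI-style Bearer authentication (replace the key with your own):

```shell
curl https://api.lumenfall.ai/openai/v1/chat/completions \
  -H "Authorization: Bearer lmnfl_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-3-flash-preview",
    "messages": [{"role": "user", "content": "Why are capybaras so chill?"}]
  }'
```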

Using OpenAI SDKs

Any official OpenAI SDK works with Lumenfall. Configure the base URL and use your Lumenfall API key:
from openai import OpenAI

client = OpenAI(
    api_key="lmnfl_your_api_key",
    base_url="https://api.lumenfall.ai/openai/v1"
)

# Chat completion
response = client.chat.completions.create(
    model="google/gemini-3-flash-preview",
    messages=[{"role": "user", "content": "Why are capybaras so chill?"}]
)

# Image generation
response = client.images.generate(
    model="gemini-3-pro-image",
    prompt="A capybara relaxing in a hot spring"
)

# Video generation (async - returns immediately)
video = client.videos.create(
    model="sora-2",
    prompt="A capybara swimming in a pool",
    seconds=5,
)
See the OpenAI SDK guide for complete examples in all supported languages.
Provider-specific parameters: Any parameters not part of the standard OpenAI API are passed through to the upstream provider. This lets you use provider-specific features like seed without waiting for explicit support. See Passing additional parameters.
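
With the OpenAI Python SDK, non-standard parameters can be supplied through the extra_body keyword argument, which merges extra keys into the request JSON. A sketch, using seed purely as an example of a provider-specific field:

```python
from openai import OpenAI

client = OpenAI(
    api_key="lmnfl_your_api_key",
    base_url="https://api.lumenfall.ai/openai/v1",
)

response = client.chat.completions.create(
    model="google/gemini-3-flash-preview",
    messages=[{"role": "user", "content": "Why are capybaras so chill?"}],
    # Keys in extra_body are merged into the request body and
    # forwarded to the upstream provider unchanged.
    extra_body={"seed": 42},
)
```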

Environment variables

You can configure most OpenAI-compatible tools using environment variables:
export OPENAI_API_KEY="lmnfl_your_api_key"
export OPENAI_BASE_URL="https://api.lumenfall.ai/openai/v1"

Errors

Lumenfall returns errors using the same format and codes as the OpenAI API. Provider-specific errors are transformed to match OpenAI’s error structure, so your existing error handling works without changes. See Unified Model Behavior for details on how errors are normalized across providers.
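
Because errors arrive in OpenAI's shape, the usual status-code conventions apply: 429 and 5xx responses are typically transient, while other 4xx responses indicate a problem with the request itself. A minimal retry-classification sketch (the helper name is ours, not part of any SDK):

```python
def is_retryable(status_code: int) -> bool:
    """Rate limits (429) and server errors (5xx) are usually transient;
    other client errors (e.g. 401, 404) should not be retried."""
    return status_code == 429 or 500 <= status_code < 600
```

Pairing a check like this with exponential backoff is a common pattern when wrapping any OpenAI-compatible API.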

Next steps