LiteLLM is a popular Python library that provides a unified interface to 100+ LLM providers. You can use LiteLLM with Lumenfall to access multiple image generation models through a single API.
How it works
LiteLLM uses provider prefixes to route requests to different backends. When you use the openai/ prefix with a custom api_base, LiteLLM:
- Routes to OpenAI-compatible endpoints: the openai/ prefix tells LiteLLM to use the OpenAI client.
- Passes the model name through: after stripping the prefix, the model name (e.g., gemini-3-pro-image) is sent directly to your api_base.
- Preserves parameters: parameters like size, quality, and style are passed through unchanged.
Lumenfall handles parameter transformation on the backend. When you request size="1024x1024" for a model that uses different dimensions, Lumenfall automatically maps it to the closest supported size.
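Under the hood, this is equivalent to calling the endpoint with the OpenAI SDK directly. A minimal sketch of what LiteLLM does for you (assuming the official openai package):

from openai import OpenAI

# The "openai/" prefix selects the OpenAI client; the remainder of the
# model string is sent as-is to the configured base URL.
client = OpenAI(
    api_key="lmnfl_your_api_key",
    base_url="https://api.lumenfall.ai/openai/v1",
)
response = client.images.generate(
    model="gemini-3-pro-image",  # prefix already stripped
    prompt="A serene mountain landscape at sunset",
    size="1024x1024",
)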
Installation
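Install LiteLLM with pip:

pip install litellm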
Configuration
Configure LiteLLM to use Lumenfall as an OpenAI-compatible provider:
import litellm
import os
# Set your Lumenfall API key
os.environ["OPENAI_API_KEY"] = "lmnfl_your_api_key"
# Configure the base URL for Lumenfall
os.environ["OPENAI_API_BASE"] = "https://api.lumenfall.ai/openai/v1"
Alternatively, pass the configuration directly:
import litellm
litellm.api_key = "lmnfl_your_api_key"
litellm.api_base = "https://api.lumenfall.ai/openai/v1"
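With either configuration in place, subsequent calls can omit the per-request credentials:

import litellm

# api_key and api_base are picked up from the environment
# (or the module-level settings above)
response = litellm.image_generation(
    model="openai/gemini-3-pro-image",
    prompt="A quick test image",
)
print(response.data[0].url)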
Generate images
Use litellm.image_generation() to create images:
import litellm
response = litellm.image_generation(
    model="openai/gemini-3-pro-image",  # prefix with "openai/" for OpenAI-compatible endpoints
    prompt="A serene mountain landscape at sunset with dramatic clouds",
    n=1,
    size="1024x1024",
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)
image_url = response.data[0].url
print(image_url)
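If you want raw image bytes instead of a hosted URL, the standard response_format parameter (shown under Generation options below) should pass through as well. A sketch that saves the result to disk:

import base64
import litellm

response = litellm.image_generation(
    model="openai/gemini-3-pro-image",
    prompt="A serene mountain landscape at sunset with dramatic clouds",
    response_format="b64_json",  # return base64 data instead of a URL
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)

# Decode the base64 payload and write it to a file
with open("landscape.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))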
Using different models
# Use GPT Image 1.5
response = litellm.image_generation(
    model="openai/gpt-image-1.5",
    prompt="A cyberpunk cityscape",
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)

# Use Flux.2 Max
response = litellm.image_generation(
    model="openai/flux.2-max",
    prompt="An abstract painting of emotions",
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)
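Because only the model string changes, comparing backends is a short loop. A minimal sketch:

import litellm

# Generate the same prompt with each model and print the resulting URLs
for model in ["openai/gemini-3-pro-image", "openai/gpt-image-1.5", "openai/flux.2-max"]:
    response = litellm.image_generation(
        model=model,
        prompt="An abstract painting of emotions",
        api_key="lmnfl_your_api_key",
        api_base="https://api.lumenfall.ai/openai/v1",
    )
    print(f"{model}: {response.data[0].url}")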
Generation options
response = litellm.image_generation(
    model="openai/gpt-image-1.5",
    prompt="A beautiful garden with roses",
    n=1,
    size="1792x1024",  # landscape orientation
    quality="hd",
    style="natural",
    response_format="url",
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)
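When you request more than one image, iterate over response.data; each entry has the same shape as the single-image case:

import litellm

response = litellm.image_generation(
    model="openai/gpt-image-1.5",
    prompt="A beautiful garden with roses",
    n=3,  # number of images, if the model supports batching
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)

# One entry per generated image
for i, image in enumerate(response.data):
    print(f"Image {i}: {image.url}")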
Edit images
Use litellm.image_edit() to modify an existing image:
import litellm
response = litellm.image_edit(
    model="openai/gpt-image-1.5",  # prefix with "openai/" for OpenAI-compatible endpoints
    image=open("original.png", "rb"),
    prompt="Add a rainbow in the sky",
    n=1,
    size="1024x1024",
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)
edited_url = response.data[0].url
print(edited_url)
With a mask
response = litellm.image_edit(
    model="openai/gpt-image-1.5",
    image=open("original.png", "rb"),
    mask=open("mask.png", "rb"),  # transparent areas will be edited
    prompt="Replace with a sunny beach",
    api_key="lmnfl_your_api_key",
    api_base="https://api.lumenfall.ai/openai/v1",
)
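If you need to build a mask, any image library works as long as the editable region is fully transparent. A minimal sketch using Pillow (installed separately) that marks the top half of the image, a hypothetical region, as editable:

from PIL import Image, ImageDraw

img = Image.open("original.png").convert("RGBA")

# Start fully opaque: opaque pixels are kept as-is
mask = Image.new("RGBA", img.size, (0, 0, 0, 255))

# Punch out the region to edit: transparent pixels will be regenerated
draw = ImageDraw.Draw(mask)
draw.rectangle((0, 0, img.width, img.height // 2), fill=(0, 0, 0, 0))
mask.save("mask.png")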
Async image editing
import asyncio
import litellm
async def edit_image():
    response = await litellm.aimage_edit(
        model="openai/gpt-image-1.5",
        image=open("original.png", "rb"),
        prompt="Make it look like winter",
        api_key="lmnfl_your_api_key",
        api_base="https://api.lumenfall.ai/openai/v1",
    )
    return response.data[0].url

url = asyncio.run(edit_image())
print(url)
LiteLLM Proxy
You can also use the LiteLLM Proxy server to route requests to Lumenfall. Add this to your litellm_config.yaml:
model_list:
  - model_name: lumenfall-gemini-image
    litellm_params:
      model: openai/gemini-3-pro-image
      api_key: lmnfl_your_api_key
      api_base: https://api.lumenfall.ai/openai/v1
  - model_name: lumenfall-gpt-image
    litellm_params:
      model: openai/gpt-image-1.5
      api_key: lmnfl_your_api_key
      api_base: https://api.lumenfall.ai/openai/v1
  - model_name: lumenfall-flux
    litellm_params:
      model: openai/flux.2-max
      api_key: lmnfl_your_api_key
      api_base: https://api.lumenfall.ai/openai/v1
Start the proxy:
litellm --config litellm_config.yaml
Then make requests to your proxy:
import litellm
response = litellm.image_generation(
    model="openai/lumenfall-gemini-image",  # the proxy is itself an OpenAI-compatible endpoint
    prompt="A serene lake at dawn",
    api_key="sk-anything",  # your proxy key, if authentication is enabled
    api_base="http://localhost:4000",  # LiteLLM proxy URL
)
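Since the proxy exposes OpenAI-compatible routes, you can also point the official OpenAI SDK (or any OpenAI-compatible client) at it. A sketch, assuming image routes are enabled on your proxy:

from openai import OpenAI

client = OpenAI(
    api_key="sk-anything",  # use your proxy's master key if authentication is enabled
    base_url="http://localhost:4000",
)
response = client.images.generate(
    model="lumenfall-gemini-image",  # model_name from litellm_config.yaml
    prompt="A serene lake at dawn",
)
print(response.data[0].url)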
Error handling
import litellm
from litellm import APIError, AuthenticationError, RateLimitError

try:
    response = litellm.image_generation(
        model="openai/gemini-3-pro-image",
        prompt="A beautiful sunset",
        api_key="lmnfl_your_api_key",
        api_base="https://api.lumenfall.ai/openai/v1",
    )
except AuthenticationError:
    print("Invalid API key")
except RateLimitError:
    print("Rate limit exceeded")
except APIError as e:
    print(f"API error: {e}")
Async support
import asyncio
import litellm
async def generate_image():
    response = await litellm.aimage_generation(
        model="openai/gemini-3-pro-image",
        prompt="A futuristic space station",
        api_key="lmnfl_your_api_key",
        api_base="https://api.lumenfall.ai/openai/v1",
    )
    return response.data[0].url

# Run the async function
url = asyncio.run(generate_image())
print(url)
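The async entry points also make concurrent generation straightforward. A sketch that fans out several prompts with asyncio.gather:

import asyncio
import litellm

async def generate(prompt):
    response = await litellm.aimage_generation(
        model="openai/gemini-3-pro-image",
        prompt=prompt,
        api_key="lmnfl_your_api_key",
        api_base="https://api.lumenfall.ai/openai/v1",
    )
    return response.data[0].url

async def main():
    prompts = ["A red fox in snow", "A lighthouse in fog", "A desert at night"]
    urls = await asyncio.gather(*(generate(p) for p in prompts))
    for url in urls:
        print(url)

asyncio.run(main())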
Next steps