llms.py is a lightweight CLI and web UI to access hundreds of AI models across many providers. Its out-of-the-box support for image models is limited. With the Lumenfall extension, you can access all of our image models inside llms.py.

How it works

The Lumenfall extension registers as an image generation provider inside llms.py. Once installed, you can generate images from any terminal with a single command:
llms --out image "A capybara relaxing in a hot spring" -m gemini-3-pro-image
The extension:
  1. Registers as a provider once it is added to ~/.llms/llms.json and LUMENFALL_API_KEY is set
  2. Routes all image requests through Lumenfall’s unified API
  3. Caches generated images to disk and displays local file paths

Prerequisites

  • llms.py installed, with the llms command available on your PATH
  • A Lumenfall API key (used in the configuration steps below)
Installation

llms --add lumenfall-ai/llmspy-lumenfall

Configuration

1. Set your API key

The API key is configured via the LUMENFALL_API_KEY environment variable. To make it available in every shell session, add it to your shell profile:
# Bash (~/.bashrc or ~/.bash_profile)
echo 'export LUMENFALL_API_KEY="lmnfl_your_api_key"' >> ~/.bashrc
source ~/.bashrc

# Zsh (~/.zshrc)
echo 'export LUMENFALL_API_KEY="lmnfl_your_api_key"' >> ~/.zshrc
source ~/.zshrc
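
To confirm the variable is actually visible to your shell before running any llms commands, a quick check like this works (using a placeholder key value here, not a real key):

```shell
# Placeholder key, as in the profile snippets above.
export LUMENFALL_API_KEY="lmnfl_your_api_key"
# Print a masked confirmation rather than echoing the key itself.
[ -n "$LUMENFALL_API_KEY" ] && echo "LUMENFALL_API_KEY is set (${#LUMENFALL_API_KEY} chars)"
```

If the variable is empty, re-source your shell profile or open a new terminal.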

2. Register the provider

Add Lumenfall to the providers section of ~/.llms/llms.json:
{
  "providers": {
    "lumenfall": { "enabled": true, "npm": "llmspy_lumenfall" }
  }
}
This step is required for the extension to work.
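
If you prefer to script this step, a minimal sketch is below. It writes the provider entry and validates the result as JSON; it targets a temp directory so it is safe to run as-is (point CONFIG_DIR at ~/.llms for real use, and merge rather than overwrite if you already have other providers configured):

```shell
# Write a minimal llms.json containing only the Lumenfall provider entry.
CONFIG_DIR="$(mktemp -d)"
cat > "$CONFIG_DIR/llms.json" <<'EOF'
{
  "providers": {
    "lumenfall": { "enabled": true, "npm": "llmspy_lumenfall" }
  }
}
EOF
# Sanity check: the file parses as valid JSON (requires python3).
python3 -m json.tool "$CONFIG_DIR/llms.json" > /dev/null && echo "config OK"
```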

Generate images

Basic generation

llms --out image "A capybara relaxing in a hot spring" -m gemini-3-pro-image
Output:
Saved files:
/home/user/.llms/cache/ab/abc123def456.png
http://localhost:8000/~cache/ab/abc123def456.png
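
The exact cache layout is not documented; judging from the path above, a plausible reading is that the file name is a content hash and the subdirectory is that hash's first two characters. A sketch of that assumed scheme:

```shell
# Assumed cache-path scheme, inferred from the sample output above:
# ~/.llms/cache/<first two chars of hash>/<hash>.<ext>
hash="abc123def456"
prefix=$(printf '%s' "$hash" | cut -c1-2)
echo "~/.llms/cache/${prefix}/${hash}.png"
```

The second line of the output is the same file served over HTTP when the built-in web server is running.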

Using different models

# GPT Image 1.5
llms --out image "A capybara wearing a tiny sombrero" -m gpt-image-1.5

# FLUX 2 Max
llms --out image "A capybara in a neon-lit Tokyo alley" -m flux.2-max

# Seedream 4.5
llms --out image "A capybara painting a self-portrait" -m seedream-4.5

Edit images

Image editing is also supported through Lumenfall.
llms "Add a tiny sombrero to the capybara" -m gpt-image-1 --image photo.png
Even image-editing models that normally cannot be used in a conversational interface, such as Seedream 4.5, support turn-based editing this way.

Web UI

llms.py includes a built-in web UI where all Lumenfall functionality (image generation and editing) works the same as on the CLI. Start it with:
llms --server 8000
Then open http://localhost:8000 in your browser.

Available models

Lumenfall strives to offer every image model that exists, backed by multiple providers. You can find our current selection in the model catalog. For models that are also available from other providers in llms.py, the first match is used. See "Routing with other providers" below to control this.

Routing with other providers

llms.py uses the first matching provider for a given model. To ensure all image requests go through Lumenfall, list it first in your providers config - before any other providers that may also serve image models (e.g. Google, OpenAI, or OpenRouter):
{
  "providers": {
    "lumenfall": { "enabled": true, "npm": "llmspy_lumenfall" },
    "google": { ... },
    "openai": { ... },
    ...
  }
}
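
Because provider order controls routing, it can be worth checking that lumenfall really is listed first. A throwaway sketch of such a check (using a temp config here; JSON object order is preserved by Python's parser, which this relies on; requires python3):

```shell
# Sample config with lumenfall listed first, as recommended above.
CONFIG="$(mktemp)"
cat > "$CONFIG" <<'EOF'
{"providers": {"lumenfall": {"enabled": true, "npm": "llmspy_lumenfall"}, "openai": {}}}
EOF
# Read back the first key of the "providers" object (insertion order is kept).
first=$(python3 -c "import json,sys; print(next(iter(json.load(open(sys.argv[1]))['providers'])))" "$CONFIG")
echo "first provider: $first"
```

Run the same read-back against your real ~/.llms/llms.json to verify your own ordering.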

Known limitations

  • Provider forcing is not supported. Lumenfall's upstream providers cannot be selected by prefixing the model name with a provider (e.g. replicate/gemini-3-pro-image), as is otherwise usual in llms.py.

Next steps