Lumenfall integrates with multiple AI providers, giving you access to a wide range of AI models through a single API. Providers handle the actual generation; Lumenfall routes your requests to them and fails over automatically if a provider is unavailable.

Text providers

| Provider | Slug | Description |
| --- | --- | --- |
| OpenRouter | openrouter | Access to 300+ text models from OpenAI, Google, Anthropic, Meta, and more |
Text models are accessed using OpenRouter model identifiers (e.g., google/gemini-3-flash-preview, openai/gpt-5.4). Browse available models on the OpenRouter models page.
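As a minimal sketch, a chat-style request body might name one of these OpenRouter identifiers. The payload shape here is an assumption for illustration only; consult the API reference for the exact request schema.

```python
# Build a chat-completion-style request body that names an OpenRouter
# model identifier. The payload shape is an assumption, not the exact
# Lumenfall schema.
payload = {
    # OpenRouter identifiers follow the pattern upstream-provider/model-name
    "model": "google/gemini-3-flash-preview",
    "messages": [
        {"role": "user", "content": "Summarize Hamlet in one sentence."},
    ],
}

# The identifier splits into an upstream-provider prefix and a model name.
upstream, model_name = payload["model"].split("/", 1)
print(upstream, model_name)
```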

Image providers

Provider availability and supported models may change. Check the model catalog for current models and their providers.

How providers work

When you make a request, Lumenfall:
  1. Matches your requested model to providers that support it
  2. Selects the best provider based on routing rules
  3. Sends your request to the provider
  4. Returns the response (or fails over to another provider if needed)
You don’t need to manage provider credentials or handle provider-specific APIs; Lumenfall abstracts this for you.
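The four steps above can be sketched in pseudocode. The provider catalog, selection order, and failover policy below are illustrative assumptions, not Lumenfall's actual implementation.

```python
# Illustrative sketch of the match -> select -> send -> fail-over flow.
# Provider names and model lists are assumptions for the example.
PROVIDER_MODELS = {
    "openrouter": {"google/gemini-3-flash-preview", "openai/gpt-5.4"},
    "vertex": {"gemini-3-pro-image"},
}

def route(model: str, send) -> str:
    # 1. Match the requested model to providers that support it.
    candidates = [p for p, models in PROVIDER_MODELS.items() if model in models]
    if not candidates:
        raise ValueError(f"no provider supports {model!r}")
    # 2. Select the best provider (here: simply the first candidate).
    errors = []
    for provider in candidates:
        try:
            # 3./4. Send the request and return the response.
            return send(provider, model)
        except ConnectionError as exc:
            # 4. Fail over to the next candidate if the provider is down.
            errors.append((provider, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

A stub `send` callable makes the flow easy to exercise: if the first provider raises `ConnectionError`, the next candidate is tried before the request fails outright.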

Forcing a specific provider

By default, Lumenfall automatically selects the best provider. You can bypass routing and force a specific provider by prefixing the model name with the provider slug from the table above.
gemini-3-pro-image → vertex/gemini-3-pro-image
See Routing: Forcing a specific provider for details and examples.
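As a sketch, the prefix format is just the provider slug, a slash, then the model name. The helper below is illustrative, not part of any SDK:

```python
def force_provider(slug: str, model: str) -> str:
    """Prefix a model name with a provider slug to bypass automatic routing.

    Illustrative helper; the slug/model format follows the example above.
    """
    return f"{slug}/{model}"

print(force_provider("vertex", "gemini-3-pro-image"))  # → vertex/gemini-3-pro-image
```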

OpenAI-compatible providers

Lumenfall also supports custom OpenAI-compatible endpoints. This allows integration with self-hosted models or other providers that implement the OpenAI API specification.
Custom provider configuration is currently available by request. Contact support if you need to connect a custom endpoint.
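For a sense of what registering a custom endpoint involves, the sketch below models the pieces an OpenAI-compatible endpoint typically needs. The field names, example URL, and model name are all hypothetical; the real configuration is arranged with support.

```python
from dataclasses import dataclass

# Hypothetical shape of a custom OpenAI-compatible endpoint registration.
# Field names and values are assumptions for illustration only.
@dataclass(frozen=True)
class CustomEndpoint:
    base_url: str        # must serve OpenAI API paths, e.g. /v1/chat/completions
    api_key_env: str     # env var holding the endpoint's API key (never hard-code keys)
    models: tuple        # model names this endpoint serves

endpoint = CustomEndpoint(
    base_url="https://llm.internal.example.com/v1",
    api_key_env="CUSTOM_LLM_API_KEY",
    models=("my-self-hosted-model",),
)
```

Because the endpoint implements the OpenAI API specification, the same chat-completion request shape shown earlier works against it unchanged; only the base URL and credentials differ.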