Lumenfall integrates with multiple AI providers, giving you access to a wide range of AI models through a single API. Providers handle the actual generation — Lumenfall routes your requests to them and handles failover if a provider is unavailable.

Supported providers

Provider availability and supported models may change. Check the model catalog for current models and their providers.

How providers work

When you make a request, Lumenfall:
  1. Matches your requested model to providers that support it
  2. Selects the best provider based on routing rules
  3. Sends your request to the provider
  4. Returns the response (or fails over to another provider if needed)
You don’t need to manage provider credentials or handle provider-specific APIs; Lumenfall abstracts this for you.
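
To make this concrete, here is a minimal sketch of a routed request in Python. The base URL, endpoint path, and payload shape are assumptions modeled on the common OpenAI-style chat completions format rather than values taken from this page; check the API reference for the real ones. The model name is borrowed from the response-header example further down.

    import os
    import requests

    # Hypothetical endpoint; Lumenfall selects the provider behind the scenes.
    resp = requests.post(
        "https://api.lumenfall.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['LUMENFALL_API_KEY']}"},
        json={
            "model": "gemini-3-pro-image",  # model name as listed in the catalog
            "messages": [{"role": "user", "content": "Hello!"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])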

Forcing a specific provider

By default, Lumenfall automatically selects the best provider. You can bypass routing and force a specific provider by prefixing the model name with that provider’s slug (slugs are listed alongside each model in the model catalog). See Routing: Forcing a specific provider for details and examples.
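
For example, assuming the slug-prefix convention described above and the vertex slug shown in the response headers below, a forced request might use a payload like this (a sketch, not confirmed syntax):

    # Prefixing the provider slug bypasses automatic routing.
    # "vertex/" is illustrative, taken from the header example
    # in the next section.
    payload = {
        "model": "vertex/gemini-3-pro-image",
        "messages": [{"role": "user", "content": "Hello!"}],
    }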

Provider status

Check the Lumenfall Status page for real-time provider availability and incident reports. Each API response includes headers showing which provider handled your request:
X-Lumenfall-Provider: vertex
X-Lumenfall-Model: gemini-3-pro-image
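
If you are using the requests-based sketch from earlier, you can read these headers directly off the response object:

    # Which provider actually served the request. Header names follow
    # the documentation above; the resp object comes from the earlier sketch.
    provider = resp.headers.get("X-Lumenfall-Provider")
    model = resp.headers.get("X-Lumenfall-Model")
    print(f"Served by {provider} ({model})")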

OpenAI-compatible providers

Lumenfall also supports custom OpenAI-compatible endpoints. This allows integration with self-hosted models or other providers that implement the OpenAI API specification.
Custom provider configuration is currently available by request. Contact support if you need to connect a custom endpoint.
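
In practice, “OpenAI-compatible” means the endpoint serves the OpenAI API surface, so one way to sanity-check a self-hosted server before requesting the integration is to point the standard openai Python SDK at it. The URL, API key, and model name below are placeholders for your own deployment, not values defined by Lumenfall:

    from openai import OpenAI

    # Placeholder values: substitute your own endpoint and model.
    client = OpenAI(
        base_url="http://localhost:8000/v1",
        api_key="local-key",  # many self-hosted servers accept any key
    )
    resp = client.chat.completions.create(
        model="my-local-model",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)

If this round-trips successfully, the endpoint implements enough of the specification for a chat completions integration.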