The Vercel AI SDK is a popular TypeScript library for building AI-powered applications. It provides a unified API for image generation that works seamlessly with Lumenfall.
There are two ways to use the Vercel AI SDK with Lumenfall:
| Approach | Package | Status | Best for |
| --- | --- | --- | --- |
| OpenAI provider | `@ai-sdk/openai` | Available now | Quick setup, generation |
| Lumenfall provider | `@lumenfall/ai-sdk` | Coming soon | Full feature support, including editing |
The @lumenfall/ai-sdk community provider is currently in development and will be published to npm soon. In the meantime, use the OpenAI provider - Lumenfall’s API is fully OpenAI-compatible.
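If you haven't installed the SDK yet, you'll need the core `ai` package and the OpenAI provider (both package names as used in the examples below):

```shell
npm install ai @ai-sdk/openai
```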
Lumenfall's API is OpenAI-compatible, so you can use the `@ai-sdk/openai` provider and point it at Lumenfall, with no extra dependencies needed. Use `createOpenAI` to create a provider instance that points to Lumenfall:
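A minimal provider setup, using the same base URL and environment variable as the full examples below:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

// Point the OpenAI-compatible provider at Lumenfall's API.
// The key is read from the environment so it never ships to the client.
const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});
```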
Never expose your API key in client-side code. Always make API calls from server-side routes (API routes, Server Actions, or server components) where the key remains on the server. See below for an example setup.
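As one way to structure this, you could wrap the call in a Next.js Route Handler so the key stays on the server. This is an illustrative sketch, not part of Lumenfall's docs: the route path, request body, and response shape are assumptions.

```typescript
// app/api/generate/route.ts — illustrative Next.js Route Handler
import { generateImage } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY, // read server-side only
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { image } = await generateImage({
    model: lumenfall.image("gemini-3-pro-image"),
    prompt,
    size: "1024x1024",
  });

  // Return base64 to the client; the API key never leaves the server.
  return Response.json({ image: image.base64 });
}
```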
Use the generateImage function with your Lumenfall provider:
```typescript
import { generateImage } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

const { image } = await generateImage({
  model: lumenfall.image("gemini-3-pro-image"),
  prompt: "A capybara lounging in a mountain hot spring at sunset",
  size: "1024x1024",
});

// Access the image as base64 or Uint8Array
console.log(image.base64);
console.log(image.uint8Array);
```
To generate multiple images in a single call, pass the `n` parameter and read the `images` array from the result:

```typescript
const { images } = await generateImage({
  model: lumenfall.image("gpt-image-1.5"),
  prompt: "A capybara in a field of sunflowers, watercolor style",
  n: 4,
  size: "1024x1024",
});

for (const image of images) {
  console.log(image.base64);
}
```
AI SDK 6 supports image editing by passing reference images in a structured prompt object:
```typescript
import { generateImage } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import fs from "fs";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

// Load the reference image to edit
const imageBuffer = fs.readFileSync("original.png");

const { image } = await generateImage({
  model: lumenfall.image("gpt-image-1"),
  prompt: {
    text: "Add a capybara sitting in the foreground",
    images: [imageBuffer],
  },
  providerOptions: {
    openai: {
      response_format: "b64_json",
    },
  },
});
```
The `@ai-sdk/openai` provider does not set `response_format` on edit requests, because OpenAI's gpt-image models return base64 by default. Lumenfall defaults to URLs (the earlier default of the OpenAI API, and the more convenient format for most users), so you must pass `response_format: "b64_json"` via `providerOptions.openai` for the SDK to parse the response correctly. The upcoming `@lumenfall/ai-sdk` provider will handle this automatically.