The Vercel AI SDK is a popular TypeScript library for building AI-powered applications. It provides a unified API for image generation that works seamlessly with Lumenfall.
Approaches
There are two ways to use the Vercel AI SDK with Lumenfall:
| Approach | Package | Status | Best for |
|---|---|---|---|
| OpenAI provider | @ai-sdk/openai | Available now | Quick setup, generation |
| Lumenfall provider | @lumenfall/ai-sdk | Coming soon | Full feature support, including editing |
The @lumenfall/ai-sdk community provider is currently in development and will be published to npm soon. In the meantime, use the OpenAI provider - Lumenfall’s API is fully OpenAI-compatible.
Installation
```shell
npm install ai @ai-sdk/openai
```
Configuration
Lumenfall’s API is OpenAI-compatible, so you can use the @ai-sdk/openai provider and point it at Lumenfall - no extra dependencies needed.
Use createOpenAI to create a provider that points to Lumenfall:
```typescript
import { createOpenAI } from "@ai-sdk/openai";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});
```
Never expose your API key in client-side code. Always make API calls from server-side routes (API routes, Server Actions, or server components) where the key remains on the server. See below for an example setup.
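In Next.js, for example, the key can live in .env.local, which is loaded on the server and never shipped to the browser (the variable name matches the snippets below):

```bash
# .env.local (keep this file out of version control)
LUMENFALL_API_KEY=your-api-key-here
```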
Generate images
Use the generateImage function with your Lumenfall provider:
```typescript
import { generateImage } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

const { image } = await generateImage({
  model: lumenfall.image("gemini-3-pro-image"),
  prompt: "A capybara lounging in a mountain hot spring at sunset",
  size: "1024x1024",
});

// Access the image as base64 or Uint8Array
console.log(image.base64);
console.log(image.uint8Array);
```
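To persist the result, the raw bytes in image.uint8Array can be written straight to disk with no base64 decoding. A minimal sketch (saveImage is a hypothetical helper, not part of the AI SDK):

```typescript
import fs from "node:fs";

// Hypothetical helper: write raw image bytes (as in image.uint8Array)
// directly to a file.
function saveImage(bytes: Uint8Array, path: string): void {
  fs.writeFileSync(path, bytes);
}

// Usage with the result above:
// saveImage(image.uint8Array, "capybara.png");
```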
Generate multiple images
Request multiple images with the n parameter:
```typescript
const { images } = await generateImage({
  model: lumenfall.image("gpt-image-1.5"),
  prompt: "A capybara in a field of sunflowers, watercolor style",
  n: 4,
  size: "1024x1024",
});

for (const image of images) {
  console.log(image.base64);
}
```
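When generating several variants like this, a small helper can decode each base64 payload and write it out with an indexed filename (saveAll is a hypothetical helper, not part of the AI SDK):

```typescript
import fs from "node:fs";

// Hypothetical helper: decode each base64 image and write it to
// `${prefix}-${index}.png`, returning the written paths.
function saveAll(images: { base64: string }[], prefix: string): string[] {
  return images.map((img, i) => {
    const path = `${prefix}-${i}.png`;
    fs.writeFileSync(path, Buffer.from(img.base64, "base64"));
    return path;
  });
}

// Usage with the result above:
// saveAll(images, "capybara");
```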
Edit images
AI SDK 6 supports image editing by passing reference images in a structured prompt object:
```typescript
import { generateImage } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import fs from "fs";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

const imageBuffer = fs.readFileSync("original.png");

const { image } = await generateImage({
  model: lumenfall.image("gpt-image-1"),
  prompt: {
    text: "Add a capybara sitting in the foreground",
    images: [imageBuffer],
  },
  providerOptions: {
    openai: {
      response_format: "b64_json",
    },
  },
});
```
The @ai-sdk/openai provider does not include response_format in edit requests, because OpenAI's gpt-image models return base64 by default. Lumenfall defaults to URLs (the original OpenAI API default, and usually the more convenient one), so you must pass response_format: "b64_json" via providerOptions.openai for the SDK to parse the response correctly. The upcoming @lumenfall/ai-sdk provider will handle this automatically.
Passing additional parameters
Use providerOptions to pass provider-specific parameters that aren’t part of the standard interface:
```typescript
const { image } = await generateImage({
  model: lumenfall.image("gpt-image-1"),
  prompt: "A capybara relaxing in a hot spring",
  size: "1024x1024",
  providerOptions: {
    openai: {
      quality: "high",
      background: "transparent",
      output_format: "png",
    },
  },
});
```
Parameters in providerOptions.openai are passed directly to the upstream provider.
Supported provider options depend on the model. Check the model’s documentation for details.
Complete example: Next.js API route
Here’s how to securely use Lumenfall in a Next.js app. The API key stays on the server, and the client calls through a server route.
API route
```typescript
// app/api/generate-image/route.ts
import { generateImage } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { NextResponse } from "next/server";

const lumenfall = createOpenAI({
  apiKey: process.env.LUMENFALL_API_KEY,
  baseURL: "https://api.lumenfall.ai/openai/v1",
});

export async function POST(request: Request) {
  try {
    const { prompt, model = "gemini-3-pro-image", size = "1024x1024" } =
      await request.json();

    if (!prompt) {
      return NextResponse.json({ error: "Prompt is required" }, { status: 400 });
    }

    const { images } = await generateImage({
      model: lumenfall.image(model),
      prompt,
      size,
    });

    return NextResponse.json({
      images: images.map((img) => img.base64),
    });
  } catch (err) {
    // Return the `error` field the client component reads on failure.
    return NextResponse.json(
      { error: err instanceof Error ? err.message : "Image generation failed" },
      { status: 500 }
    );
  }
}
```
Client component
```typescript
// app/components/ImageGenerator.tsx
"use client";

import { useState } from "react";

export function ImageGenerator() {
  const [prompt, setPrompt] = useState("");
  const [imageData, setImageData] = useState<string[]>([]);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState("");

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();
    setLoading(true);
    setError("");
    setImageData([]);

    try {
      const response = await fetch("/api/generate-image", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });

      if (!response.ok) {
        const data = await response.json();
        setError(data.error || "Failed to generate image");
        return;
      }

      const data = await response.json();
      setImageData(data.images);
    } catch (err) {
      setError(err instanceof Error ? err.message : "An error occurred");
    } finally {
      setLoading(false);
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <textarea
        value={prompt}
        onChange={(e) => setPrompt(e.target.value)}
        placeholder="Describe the image you want to generate..."
      />
      <button type="submit" disabled={loading}>
        {loading ? "Generating..." : "Generate Image"}
      </button>
      {error && <p>{error}</p>}
      {imageData.map((img, i) => (
        <img
          key={i}
          src={`data:image/png;base64,${img}`}
          alt={`Generated ${i + 1}`}
        />
      ))}
    </form>
  );
}
```
Next steps