
# OpenAI SDK

> Use the official OpenAI SDKs with Lumenfall

Lumenfall is fully compatible with the official OpenAI SDKs. Because it implements the OpenAI API specification, you can use any of them by changing only the base URL and API key.

**Official SDKs:**

* [openai-python](https://github.com/openai/openai-python) - Python
* [openai-node](https://github.com/openai/openai-node) - JavaScript / TypeScript
* [openai-go](https://github.com/openai/openai-go) - Go
* [openai-dotnet](https://github.com/openai/openai-dotnet) - C# / .NET
* [openai-java](https://github.com/openai/openai-java) - Java
* [openai-ruby](https://github.com/openai/openai-ruby) - Ruby

## Installation

<CodeGroup>
  ```bash Python theme={null}
  pip install openai
  ```

  ```bash JavaScript / TypeScript theme={null}
  npm install openai
  ```

  ```bash Go theme={null}
  go get github.com/openai/openai-go/v3
  ```

  ```bash C# / .NET theme={null}
  dotnet add package OpenAI
  ```

  ```xml Java (Maven) theme={null}
  <dependency>
      <groupId>com.openai</groupId>
      <artifactId>openai-java</artifactId>
      <version>0.20.0</version>
  </dependency>
  ```

  ```bash Ruby theme={null}
  gem install openai
  ```
</CodeGroup>

## Configuration

<CodeGroup>
  ```python Python theme={null}
  from openai import OpenAI

  client = OpenAI(
      api_key="lmnfl_your_api_key",
      base_url="https://api.lumenfall.ai/openai/v1"
  )
  ```

  ```typescript JavaScript / TypeScript theme={null}
  import OpenAI from "openai";

  const client = new OpenAI({
    apiKey: "lmnfl_your_api_key",
    baseURL: "https://api.lumenfall.ai/openai/v1",
  });
  ```

  ```go Go theme={null}
  package main

  import (
      "github.com/openai/openai-go/v3"
      "github.com/openai/openai-go/v3/option"
  )

  func main() {
      client := openai.NewClient(
          option.WithAPIKey("lmnfl_your_api_key"),
          option.WithBaseURL("https://api.lumenfall.ai/openai/v1"),
      )
  }
  ```

  ```csharp C# / .NET theme={null}
  using System.ClientModel;
  using OpenAI;
  using OpenAI.Images;

  var options = new OpenAIClientOptions
  {
      Endpoint = new Uri("https://api.lumenfall.ai/openai/v1")
  };

  var client = new OpenAIClient(new ApiKeyCredential("lmnfl_your_api_key"), options);
  var imageClient = client.GetImageClient("gemini-3-pro-image");
  ```

  ```java Java theme={null}
  import com.openai.client.OpenAIClient;
  import com.openai.client.okhttp.OpenAIOkHttpClient;

  OpenAIClient client = OpenAIOkHttpClient.builder()
      .apiKey("lmnfl_your_api_key")
      .baseUrl("https://api.lumenfall.ai/openai/v1")
      .build();
  ```

  ```ruby Ruby theme={null}
  require "openai"

  client = OpenAI::Client.new(
    api_key: "lmnfl_your_api_key",
    base_url: "https://api.lumenfall.ai/openai/v1"
  )
  ```
</CodeGroup>

## Chat completions

<CodeGroup>
  ```python Python theme={null}
  response = client.chat.completions.create(
      model="google/gemini-3-flash-preview",
      messages=[
          {"role": "user", "content": "Why are capybaras so chill?"}
      ]
  )

  print(response.choices[0].message.content)
  ```

  ```typescript JavaScript / TypeScript theme={null}
  const response = await client.chat.completions.create({
    model: "google/gemini-3-flash-preview",
    messages: [
      { role: "user", content: "Why are capybaras so chill?" },
    ],
  });

  console.log(response.choices[0].message.content);
  ```

  ```go Go theme={null}
  response, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{
      Model: "google/gemini-3-flash-preview",
      Messages: []openai.ChatCompletionMessageParamUnion{
          openai.UserMessage("Why are capybaras so chill?"),
      },
  })
  if err != nil {
      panic(err)
  }

  fmt.Println(response.Choices[0].Message.Content)
  ```

  ```csharp C# / .NET theme={null}
  var chatClient = client.GetChatClient("google/gemini-3-flash-preview");

  ChatCompletion response = await chatClient.CompleteChatAsync(
      [new UserChatMessage("Why are capybaras so chill?")]
  );

  Console.WriteLine(response.Content[0].Text);
  ```

  ```java Java theme={null}
  var params = ChatCompletionCreateParams.builder()
      .model("google/gemini-3-flash-preview")
      .addMessage(ChatCompletionUserMessageParam.builder()
          .content("Why are capybaras so chill?")
          .build())
      .build();

  var response = client.chat().completions().create(params);
  System.out.println(response.choices().get(0).message().content().orElse(null));
  ```

  ```ruby Ruby theme={null}
  response = client.chat.completions.create(
    model: "google/gemini-3-flash-preview",
    messages: [
      { role: "user", content: "Why are capybaras so chill?" }
    ]
  )

  puts response.choices.first.message.content
  ```
</CodeGroup>

### Streaming

<CodeGroup>
  ```python Python theme={null}
  stream = client.chat.completions.create(
      model="google/gemini-3-flash-preview",
      messages=[
          {"role": "user", "content": "Tell me a fun fact about capybaras"}
      ],
      stream=True
  )

  for chunk in stream:
      if chunk.choices[0].delta.content:
          print(chunk.choices[0].delta.content, end="")
  ```

  ```typescript JavaScript / TypeScript theme={null}
  const stream = await client.chat.completions.create({
    model: "google/gemini-3-flash-preview",
    messages: [
      { role: "user", content: "Tell me a fun fact about capybaras" },
    ],
    stream: true,
  });

  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content;
    if (content) process.stdout.write(content);
  }
  ```
</CodeGroup>
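If you also need the complete reply once streaming finishes, the deltas can be accumulated while they print. A minimal Python sketch; the `collect_stream` helper is ours, not part of the SDK, and it accepts any iterable of chunk-shaped objects:

```python theme={null}
def collect_stream(stream, echo=True):
    """Accumulate chat-completion chunks into the full reply text.

    `stream` is any iterable of objects shaped like the SDK's
    ChatCompletionChunk (choices[0].delta.content). With echo=True,
    each delta is printed as it arrives, as in the loops above.
    """
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            if echo:
                print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)
```

The returned string can then be reused, for example appended to the conversation history for a follow-up turn.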

## Generate images

<CodeGroup>
  ```python Python theme={null}
  response = client.images.generate(
      model="gemini-3-pro-image",
      prompt="A serene mountain landscape at sunset with dramatic clouds",
      n=1,
      size="1024x1024"
  )

  print(response.data[0].url)
  ```

  ```typescript JavaScript / TypeScript theme={null}
  const response = await client.images.generate({
    model: "gemini-3-pro-image",
    prompt: "A serene mountain landscape at sunset with dramatic clouds",
    n: 1,
    size: "1024x1024",
  });

  console.log(response.data[0].url);
  ```

  ```go Go theme={null}
  response, err := client.Images.Generate(context.Background(), openai.ImageGenerateParams{
      Model:  "gemini-3-pro-image",
      Prompt: "A serene mountain landscape at sunset with dramatic clouds",
      N:      openai.Int(1),
      Size:   openai.ImageGenerateParamsSize1024x1024,
  })
  if err != nil {
      panic(err)
  }

  fmt.Println(response.Data[0].URL)
  ```

  ```csharp C# / .NET theme={null}
  GeneratedImage image = await imageClient.GenerateImageAsync(
      "A serene mountain landscape at sunset with dramatic clouds",
      new ImageGenerationOptions
      {
          Size = GeneratedImageSize.W1024xH1024,
          Quality = GeneratedImageQuality.Standard
      }
  );

  Console.WriteLine(image.ImageUri);
  ```

  ```java Java theme={null}
  ImagesResponse response = client.images().generate(ImageGenerateParams.builder()
      .model("gemini-3-pro-image")
      .prompt("A serene mountain landscape at sunset with dramatic clouds")
      .n(1)
      .size(ImageGenerateParams.Size._1024X1024)
      .build());

  System.out.println(response.data().get(0).url().orElse(null));
  ```

  ```ruby Ruby theme={null}
  response = client.images.generate(
    model: "gemini-3-pro-image",
    prompt: "A serene mountain landscape at sunset with dramatic clouds",
    size: "1024x1024"
  )

  puts response.data.first.url
  ```
</CodeGroup>

## Edit images

<CodeGroup>
  ```python Python theme={null}
  response = client.images.edit(
      model="gpt-image-1.5",
      image=open("original.png", "rb"),
      prompt="Add a rainbow in the sky",
      n=1,
      size="1024x1024"
  )

  print(response.data[0].url)
  ```

  ```typescript JavaScript / TypeScript theme={null}
  import fs from "fs";

  const response = await client.images.edit({
    model: "gpt-image-1.5",
    image: fs.createReadStream("original.png"),
    prompt: "Add a rainbow in the sky",
    n: 1,
    size: "1024x1024",
  });

  console.log(response.data[0].url);
  ```

  ```go Go theme={null}
  imageFile, err := os.Open("original.png")
  if err != nil {
      panic(err)
  }
  defer imageFile.Close()

  response, err := client.Images.Edit(context.Background(), openai.ImageEditParams{
      Model:  "gpt-image-1.5",
      Prompt: "Add a rainbow in the sky",
      N:      openai.Int(1),
      Size:   openai.ImageEditParamsSize1024x1024,
      Image: openai.ImageEditParamsImageUnion{
          OfFile: openai.File(imageFile, "original.png", "image/png"),
      },
  })
  if err != nil {
      panic(err)
  }

  fmt.Println(response.Data[0].URL)
  ```

  ```csharp C# / .NET theme={null}
  var imageClient = client.GetImageClient("gpt-image-1.5");

  using var imageStream = File.OpenRead("original.png");

  GeneratedImage editedImage = await imageClient.GenerateImageEditAsync(
      imageStream,
      "original.png",
      "Add a rainbow in the sky",
      new ImageEditOptions
      {
          Size = GeneratedImageSize.W1024xH1024
      }
  );

  Console.WriteLine(editedImage.ImageUri);
  ```

  ```java Java theme={null}
  InputStream imageStream = Files.newInputStream(Path.of("original.png"));

  ImagesResponse response = client.images().edit(ImageEditParams.builder()
      .model("gpt-image-1.5")
      .image(imageStream)
      .prompt("Add a rainbow in the sky")
      .n(1)
      .size(ImageEditParams.Size._1024X1024)
      .build());

  System.out.println(response.data().get(0).url().orElse(null));
  ```

  ```ruby Ruby theme={null}
  response = client.images.edit(
    model: "gpt-image-1.5",
    image: Pathname("original.png"),
    prompt: "Add a rainbow in the sky",
    size: "1024x1024"
  )

  puts response.data.first.url
  ```
</CodeGroup>

## Generate videos

Video generation is asynchronous. Submit a request with `client.videos.create()`, then poll with `client.videos.retrieve()` until the video is ready.

<CodeGroup>
  ```python Python theme={null}
  import time

  # Submit a video generation request
  video = client.videos.create(
      model="sora-2",
      prompt="A capybara splashing in a river at golden hour",
      seconds=5,
      size="1920x1080",
  )

  # Poll until the video is ready
  while video.status not in ("completed", "failed"):
      time.sleep(5)
      video = client.videos.retrieve(video.id)

  print(video.output.url)
  ```

  ```typescript JavaScript / TypeScript theme={null}
  // Submit a video generation request
  let video = await client.videos.create({
    model: "sora-2",
    prompt: "A capybara splashing in a river at golden hour",
    seconds: 5,
    size: "1920x1080",
  });

  // Poll until the video is ready
  while (video.status !== "completed" && video.status !== "failed") {
    await new Promise((r) => setTimeout(r, 5000));
    video = await client.videos.retrieve(video.id);
  }

  console.log(video.output.url);
  ```
</CodeGroup>
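The loops above poll indefinitely, so a stuck job would hang them forever. A timeout guard is easy to add. This helper is a sketch of ours, not an SDK feature: `retrieve` can be any callable that returns an object with a `.status` attribute, such as `client.videos.retrieve`.

```python theme={null}
import time

def wait_for_video(retrieve, video_id, poll_interval=5.0, timeout=600.0):
    """Poll retrieve(video_id) until the job reaches a terminal state.

    Returns the final video object, or raises TimeoutError if the job
    is neither "completed" nor "failed" within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while True:
        video = retrieve(video_id)
        if video.status in ("completed", "failed"):
            return video
        if time.monotonic() >= deadline:
            raise TimeoutError(f"video {video_id} still {video.status} after {timeout}s")
        time.sleep(poll_interval)
```

For example: `video = wait_for_video(client.videos.retrieve, video.id)`.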

### Video generation options

| Parameter      | Type             | Default  | Description                                                         |
| -------------- | ---------------- | -------- | ------------------------------------------------------------------- |
| `model`        | string           | required | Model ID (e.g., `sora-2`)                                           |
| `prompt`       | string           | required | Text description of the desired video                               |
| `seconds`      | string or number | varies   | Duration of the video in seconds                                    |
| `size`         | string           | varies   | Video dimensions (e.g., `1920x1080`) or aspect ratio (e.g., `16:9`) |
| `n`            | integer          | `1`      | Number of videos to generate (1-4)                                  |
| `aspect_ratio` | string           | -        | Aspect ratio (e.g., `16:9`, `9:16`)                                 |
| `resolution`   | string           | -        | Resolution shorthand (`720p`, `1080p`)                              |
| `input_image`  | string           | -        | URL of image for image-to-video generation                          |
| `webhook_url`  | string           | -        | URL for completion notification                                     |

## Environment variables

All SDKs support environment variables for configuration:

```bash theme={null}
export OPENAI_API_KEY="lmnfl_your_api_key"
export OPENAI_BASE_URL="https://api.lumenfall.ai/openai/v1"
```

<Warning>
  Store your API key in environment variables rather than hardcoding it in your source code. Never commit API keys to version control.
</Warning>

## Image generation options

| Parameter         | Type    | Default     | Description                                                          |
| ----------------- | ------- | ----------- | -------------------------------------------------------------------- |
| `model`           | string  | required    | Model ID (e.g., `gemini-3-pro-image`, `gpt-image-1.5`, `flux.2-max`) |
| `prompt`          | string  | required    | Text description of the desired image                                |
| `n`               | integer | `1`         | Number of images to generate (1-10)                                  |
| `size`            | string  | `1024x1024` | Image dimensions                                                     |
| `quality`         | string  | `standard`  | Image quality (`standard` or `hd`)                                   |
| `response_format` | string  | `url`       | Response format (`url` or `b64_json`)                                |
| `style`           | string  | `vivid`     | Image style (`vivid` or `natural`)                                   |
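With `response_format="b64_json"`, the image arrives inline as base64 instead of as a URL. A small Python decoding sketch; the `save_b64_image` helper is ours, and the commented request assumes the same client and model as the earlier examples:

```python theme={null}
import base64

def save_b64_image(b64_data: str, path: str) -> bytes:
    """Decode a b64_json payload and write the raw image bytes to disk."""
    image_bytes = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(image_bytes)
    return image_bytes

# response = client.images.generate(
#     model="gemini-3-pro-image",
#     prompt="A capybara relaxing in a hot spring",
#     response_format="b64_json",
# )
# save_b64_image(response.data[0].b64_json, "capybara.png")
```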

## Passing additional parameters

Lumenfall passes through any additional parameters to the upstream provider. This allows you to use provider-specific features that aren't part of the standard OpenAI API.

<CodeGroup>
  ```python Python theme={null}
  response = client.images.generate(
      model="gemini-3-pro-image",
      prompt="A capybara relaxing in a hot spring",
      size="1024x1024",
      extra_body={
          "seed": 12345,
          "custom_provider_param": "value"
      }
  )
  ```

  ```typescript JavaScript / TypeScript theme={null}
  const response = await client.images.generate({
    model: "gemini-3-pro-image",
    prompt: "A capybara relaxing in a hot spring",
    size: "1024x1024",
    // @ts-expect-error Provider-specific parameter
    seed: 12345,
    // @ts-expect-error Provider-specific parameter
    custom_provider_param: "value",
  });
  ```

  ```go Go theme={null}
  params := openai.ImageGenerateParams{
      Model:  "gemini-3-pro-image",
      Prompt: "A capybara relaxing in a hot spring",
      Size:   openai.ImageGenerateParamsSize1024x1024,
  }

  // Non-standard fields are merged into the JSON body per request.
  response, err := client.Images.Generate(context.Background(), params,
      option.WithJSONSet("seed", 12345),
      option.WithJSONSet("custom_provider_param", "value"),
  )
  ```

  ```csharp C# / .NET theme={null}
  var options = new ImageGenerationOptions();
  options.Patch.Set("$.seed"u8, 12345);
  options.Patch.Set("$.custom_provider_param"u8, "value");

  GeneratedImage image = await imageClient.GenerateImageAsync(
      "A capybara relaxing in a hot spring",
      options
  );
  ```

  ```java Java theme={null}
  ImagesResponse response = client.images().generate(ImageGenerateParams.builder()
      .model("gemini-3-pro-image")
      .prompt("A capybara relaxing in a hot spring")
      .size(ImageGenerateParams.Size._1024X1024)
      .putAdditionalBodyProperty("seed", JsonValue.from(12345))
      .putAdditionalBodyProperty("custom_provider_param", JsonValue.from("value"))
      .build());
  ```

  ```ruby Ruby theme={null}
  response = client.images.generate(
    model: "gemini-3-pro-image",
    prompt: "A capybara relaxing in a hot spring",
    size: "1024x1024",
    request_options: {
      extra_body: {
        seed: 12345,
        custom_provider_param: "value"
      }
    }
  )
  ```
</CodeGroup>

<Note>
  Additional parameters are passed directly to the provider. Check the provider's documentation for supported parameters. Unsupported parameters may be silently ignored.
</Note>

## Next steps

<CardGroup cols={2}>
  <Card title="API Reference" icon="code" href="/api-reference/introduction">
    Explore the full API documentation.
  </Card>

  <Card title="Available Models" icon="cube" href="/models">
    See all available models.
  </Card>
</CardGroup>
