# Ruby (RubyLLM / ruby-openai)

> Use RubyLLM or ruby-openai with Lumenfall

Besides the official [OpenAI SDK for Ruby](/client-libraries/openai-sdk), there are two popular community Ruby libraries for working with OpenAI-compatible APIs: [RubyLLM](https://rubyllm.com) and [ruby-openai](https://github.com/alexrudall/ruby-openai). Both work seamlessly with Lumenfall.

## RubyLLM

[RubyLLM](https://rubyllm.com) is currently the most popular Ruby gem for AI interactions, as measured by GitHub stars. It provides a clean, Ruby-idiomatic interface built around an expressive DSL.

### Installation

Add to your Gemfile:

```ruby theme={null}
gem "ruby_llm"
```

Then run:

```bash theme={null}
bundle install
```

### Configuration

<Warning>
  When using RubyLLM with Lumenfall, you **must** pass two parameters in every image generation call (the examples below include them):

  * `provider: "openai"` - Routes the request through RubyLLM's OpenAI-compatible provider, which is how calls reach Lumenfall
  * `assume_model_exists: true` - Skips RubyLLM's model registry check so models it doesn't list can still be used

  Without these parameters, RubyLLM may try to route the request to a different provider and the call will fail.
</Warning>

#### Global configuration

Configure the API credentials globally:

```ruby theme={null}
require "ruby_llm"

RubyLLM.configure do |config|
  config.openai_api_key = ENV["LUMENFALL_API_KEY"]
  config.openai_api_base = "https://api.lumenfall.ai/openai/v1"
end
```

#### Per-call configuration

You can also configure RubyLLM per-call without global setup, using `RubyLLM.context`:

```ruby theme={null}
require "ruby_llm"

ctx = RubyLLM.context do |config|
  config.openai_api_key = ENV["LUMENFALL_API_KEY"]
  config.openai_api_base = "https://api.lumenfall.ai/openai/v1"
end

image = ctx.paint(
  "A capybara relaxing in a hot spring",
  model: "gemini-3-pro-image",
  size: "1024x1024",
  provider: "openai",
  assume_model_exists: true
)

puts image.url
```

This is useful when you need to make multiple calls with the same configuration or when you need to use different API credentials in different parts of your application.

### Generate images

Use the `paint` method with the required Lumenfall parameters:

```ruby theme={null}
image = RubyLLM.paint(
  "A serene mountain landscape at sunset with dramatic clouds",
  model: "gemini-3-pro-image",
  size: "1024x1024",
  provider: "openai",
  assume_model_exists: true
)

puts image.url
```

### Image editing

RubyLLM does not currently support image editing. This is a known limitation tracked in [GitHub issue #512](https://github.com/crmne/ruby_llm/issues/512).

To edit images with Lumenfall, use an alternative SDK or call the API directly over HTTP.
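
As a sketch of the direct-HTTP route, the request below is built with Ruby's standard library only. The endpoint path and multipart field names are assumed from the OpenAI images API shape that Lumenfall mirrors, and the `build_edit_request` helper name is illustrative:

```ruby theme={null}
require "net/http"
require "uri"

# Build a multipart POST against the images/edits endpoint; the path and
# field names follow the OpenAI API shape that Lumenfall mirrors.
def build_edit_request(image_path, prompt, model: "gpt-image-1.5")
  uri = URI("https://api.lumenfall.ai/openai/v1/images/edits")
  request = Net::HTTP::Post.new(uri)
  request["Authorization"] = "Bearer #{ENV["LUMENFALL_API_KEY"]}"
  request.set_form(
    [
      ["model", model],
      ["prompt", prompt],
      ["image", File.open(image_path, "rb")]
    ],
    "multipart/form-data"
  )
  request
end
```

Send it with `request = build_edit_request("original.png", "Add a capybara to this image")` followed by `Net::HTTP.start(request.uri.hostname, request.uri.port, use_ssl: true) { |http| http.request(request) }`, then parse the JSON body the same way as in the ruby-openai examples below.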

***

## ruby-openai

[ruby-openai](https://github.com/alexrudall/ruby-openai) is a feature-rich, community-maintained Ruby client that closely mirrors the OpenAI API structure. Note that it is not the official OpenAI Ruby SDK.

### Installation

Add to your Gemfile:

```ruby theme={null}
gem "ruby-openai"
```

Then run:

```bash theme={null}
bundle install
```

### Configuration

```ruby theme={null}
require "openai"

client = OpenAI::Client.new(
  access_token: ENV["LUMENFALL_API_KEY"],
  uri_base: "https://api.lumenfall.ai/openai/v1"
)
```

Or configure globally:

```ruby theme={null}
OpenAI.configure do |config|
  config.access_token = ENV["LUMENFALL_API_KEY"]
  config.uri_base = "https://api.lumenfall.ai/openai/v1"
end

client = OpenAI::Client.new
```

### Generate images

```ruby theme={null}
response = client.images.generate(
  parameters: {
    model: "gemini-3-pro-image",
    prompt: "A capybara relaxing in a hot spring",
    size: "1024x1024",
    n: 1
  }
)

puts response.dig("data", 0, "url")
```
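
The `url` values returned by image APIs may be short-lived, so it is common to download the result immediately. A minimal stdlib sketch (the `download_image` helper name is illustrative, not part of any SDK):

```ruby theme={null}
require "net/http"
require "uri"

# Fetch an image URL and write the raw bytes to disk.
def download_image(url, path)
  File.binwrite(path, Net::HTTP.get(URI(url)))
end
```

Use it on the response above with `download_image(response.dig("data", 0, "url"), "capybara.png")`.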

#### Generate multiple images

```ruby theme={null}
response = client.images.generate(
  parameters: {
    model: "gpt-image-1.5",
    prompt: "A capybara relaxing in a hot spring",
    size: "1024x1024",
    n: 2
  }
)

response["data"].each_with_index do |image, index|
  puts "Image #{index + 1}: #{image["url"]}"
end
```

#### Get base64 response

```ruby theme={null}
response = client.images.generate(
  parameters: {
    model: "gpt-image-1.5",
    prompt: "A capybara in a forest",
    response_format: "b64_json"
  }
)

base64_image = response.dig("data", 0, "b64_json")
```
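
To turn the base64 payload into an image file, decode it with Ruby's standard library. This is a generic sketch (the `save_b64_image` helper name is illustrative):

```ruby theme={null}
require "base64"

# Decode a b64_json payload and write the raw image bytes to disk.
def save_b64_image(b64_payload, path)
  File.binwrite(path, Base64.decode64(b64_payload))
end
```

Applied to the response above: `save_b64_image(base64_image, "capybara.png")`.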

### Edit images

```ruby theme={null}
response = client.images.edit(
  parameters: {
    model: "gpt-image-1.5",
    image: File.open("original.png", "rb"),
    prompt: "Add a capybara to this image",
    size: "1024x1024",
    n: 1
  }
)

puts response.dig("data", 0, "url")
```

### List models

```ruby theme={null}
models = client.models.list

models["data"].each do |model|
  puts model["id"]
end
```

### Retrieve a specific model

```ruby theme={null}
model = client.models.retrieve(id: "gemini-3-pro-image")

puts model["id"]
puts model["object"]
```

## Next steps

<CardGroup cols={2}>
  <Card title="API Reference" icon="code" href="/api-reference/introduction">
    Explore the full API documentation.
  </Card>

  <Card title="Available Models" icon="cube" href="/api-reference/models/list">
    See all available image generation models.
  </Card>
</CardGroup>
