A delightful Ruby way to work with AI. No configuration madness, no complex callbacks, no handler hell. Just beautiful, expressive Ruby code.
🤺 Battle tested at 💬 Chat with Work
The problem with AI libraries
Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.
RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies: just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore.
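For example, switching providers is just a model name; the call shape and the response object stay the same (a minimal sketch; the model names here are only examples):
openai_chat = RubyLLM.chat(model: 'gpt-4.1-nano')
claude_chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')
# Both respond to the same API and return the same kind of message object
puts openai_chat.ask("Say hi in five words").content
puts claude_chat.ask("Say hi in five words").content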
Features
- 💬 Chat with OpenAI, Anthropic, Gemini, AWS Bedrock Anthropic, and DeepSeek models
- 👁️ Vision and Audio understanding
- 📄 PDF Analysis for understanding documents
- 🖼️ Image generation with DALL-E and other providers
- 📊 Embeddings for vector search and semantic analysis
- 🔧 Tools that let AI use your Ruby code
- 🚂 Rails integration to persist chats and messages with ActiveRecord
- 🌊 Streaming responses with proper Ruby patterns
What makes it great
# Just ask questions
chat = RubyLLM.chat
chat.ask "What's the best way to learn Ruby?"
# Analyze images
chat.ask "What's in this image?", with: { image: "ruby_conf.jpg" }
# Analyze audio recordings
chat.ask "Describe this meeting", with: { audio: "meeting.wav" }
# Analyze documents
chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
# Stream responses in real-time
chat.ask "Tell me a story about a Ruby programmer" do |chunk|
print chunk.content
end
# Generate images
RubyLLM.paint "a sunset over mountains in watercolor style"
# Create vector embeddings
RubyLLM.embed "Ruby is elegant and expressive"
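# A sketch of using the result -- assuming the returned embedding exposes
# its raw vector via #vectors (see the embeddings guide at rubyllm.com)
embedding = RubyLLM.embed "Ruby is elegant and expressive"
embedding.vectors # an array of floats, ready for your vector store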
# Let AI use your code
class Weather < RubyLLM::Tool
  description "Gets current weather for a location"
  param :latitude, desc: "Latitude (e.g., 52.5200)"
  param :longitude, desc: "Longitude (e.g., 13.4050)"

  def execute(latitude:, longitude:)
    url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}&longitude=#{longitude}&current=temperature_2m,wind_speed_10m"

    response = Faraday.get(url)
    # The parsed data (or the error hash) is what gets returned to the model
    data = JSON.parse(response.body)
  rescue => e
    { error: e.message }
  end
end
chat.with_tool(Weather).ask "What's the weather in Berlin? (52.5200, 13.4050)"
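# ask returns the assistant's message, so you can also capture the reply
# (a sketch assuming the message object responds to #content)
response = chat.ask "And in Paris? (48.8566, 2.3522)"
puts response.content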
Installation
# In your Gemfile
gem 'ruby_llm'
# Then run
bundle install
# Or install it yourself
gem install ruby_llm
Configure with your API keys:
RubyLLM.configure do |config|
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
  config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
  config.gemini_api_key = ENV.fetch('GEMINI_API_KEY', nil)
  config.deepseek_api_key = ENV.fetch('DEEPSEEK_API_KEY', nil)

  # Bedrock
  config.bedrock_api_key = ENV.fetch('AWS_ACCESS_KEY_ID', nil)
  config.bedrock_secret_key = ENV.fetch('AWS_SECRET_ACCESS_KEY', nil)
  config.bedrock_region = ENV.fetch('AWS_REGION', nil)
  config.bedrock_session_token = ENV.fetch('AWS_SESSION_TOKEN', nil)
end
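In a Rails app this usually lives in an initializer. A minimal sketch, assuming you keep keys in encrypted credentials (the file path and credential names are just examples):
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key = Rails.application.credentials.dig(:openai, :api_key)
end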
Have great conversations
# Start a chat with the default model (gpt-4.1-nano)
chat = RubyLLM.chat
# Or specify what you want
chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')
# Simple questions just work
chat.ask "What's the difference between attr_reader and attr_accessor?"
# Multi-turn conversations are seamless
chat.ask "Could you give me an example?"
# Stream responses in real-time
chat.ask "Tell me a story about a Ruby programmer" do |chunk|
print chunk.content
end
# Set personality or behavior with instructions (aka system prompts)
chat.with_instructions "You are a friendly Ruby expert who loves to help beginners"
# Understand content in multiple forms
chat.ask "Compare these diagrams", with: { image: ["diagram1.png", "diagram2.png"] }
chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
chat.ask "What's being said?", with: { audio: "meeting.wav" }
# Need a different model mid-conversation? No problem
chat.with_model('gemini-2.0-flash').ask "What's your favorite algorithm?"
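Every turn is kept on the chat, so you can replay or log a conversation afterwards. A small sketch, assuming each message exposes its role and content:
chat.messages.each do |message|
  puts "#{message.role}: #{message.content}"
end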
Rails integration that makes sense
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat

  # Works great with Turbo
  broadcasts_to ->(chat) { "chat_#{chat.id}" }
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
end

# app/models/tool_call.rb
class ToolCall < ApplicationRecord
  acts_as_tool_call
end
# In a background job
chat = Chat.create! model_id: "gpt-4.1-nano"
# Set personality or behavior with instructions (aka system prompts) - they're persisted too!
chat.with_instructions "You are a friendly Ruby expert who loves to help beginners"
chat.ask("What's your favorite Ruby gem?") do |chunk|
Turbo::StreamsChannel.broadcast_append_to(
chat,
target: "response",
partial: "messages/chunk",
locals: { chunk: chunk }
)
end
# That's it - chat history is automatically saved
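The acts_as helpers need backing tables. Here is a hypothetical migration to show the rough shape; the exact columns the helpers expect are documented in the Rails guide at https://rubyllm.com:
class CreateChatTables < ActiveRecord::Migration[7.1]
  def change
    create_table :chats do |t|
      t.string :model_id
      t.timestamps
    end

    create_table :messages do |t|
      t.references :chat
      t.string :role
      t.text :content
      t.timestamps
    end

    # A tool_calls table is also needed if you use acts_as_tool_call
  end
end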
Creating tools is a breeze
class Search < RubyLLM::Tool
  description "Searches a knowledge base"
  param :query, desc: "The search query"
  param :limit, type: :integer, desc: "Max results", required: false

  def execute(query:, limit: 5)
    # Your search logic here
    Document.search(query).limit(limit).map(&:title)
  end
end
# Let the AI use it
chat.with_tool(Search).ask "Find documents about Ruby 3.3 features"
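Tools also compose; a chat can carry several at once. A sketch assuming the plural with_tools helper (check the tools guide if in doubt):
chat.with_tools(Search, Weather).ask "Find docs about Ruby 3.3, and is it nice outside in Berlin?"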
Learn more
Check out the guides at https://rubyllm.com for deeper dives into conversations with tools, streaming responses, embedding generation, and more.
Contributing
We welcome contributions to RubyLLM!
See CONTRIBUTING.md for detailed instructions on how to:
- Run the test suite
- Add new features
- Update documentation
- Re-record VCR cassettes when needed
We appreciate your help making RubyLLM better!
License
Released under the MIT License.