
ChatGPT Ruby


🤖💎 A comprehensive Ruby SDK for OpenAI's GPT APIs, providing a robust, feature-rich interface for AI-powered applications.

📚 Check out the Integration Guide to get started!

Features

  • 🚀 Full support for GPT-3.5-Turbo and GPT-4 models
  • 📡 Streaming responses support
  • 🔧 Function calling and JSON mode
  • 🎨 DALL-E image generation
  • 🔄 Fine-tuning capabilities
  • 📊 Token counting and validation
  • ⚡ Async operations support
  • 🛡️ Built-in rate limiting and retries
  • 🎯 Type-safe responses
  • 📝 Comprehensive logging

Table of Contents

  • Features
  • Installation
  • Quick Start
  • Configuration
  • Core Features
    • Chat Completions
    • Function Calling
    • Image Generation (DALL-E)
    • Fine-tuning
    • Token Management
    • Error Handling
  • Advanced Usage
    • Async Operations
    • Batch Operations
    • Response Objects
  • Development
  • Contributing
  • License

Installation

Add to your Gemfile:

gem 'chatgpt-ruby'

Or install directly:

$ gem install chatgpt-ruby

Quick Start

require 'chatgpt'

# Initialize with API key
client = ChatGPT::Client.new(ENV['OPENAI_API_KEY'])

# Chat API (Recommended for GPT-3.5-turbo, GPT-4)
response = client.chat([
  { role: "user", content: "What is Ruby?" }
])

puts response.dig("choices", 0, "message", "content")

# Completions API (For GPT-3.5-turbo-instruct)
response = client.completions("What is Ruby?")
puts response.dig("choices", 0, "text")

Configuration

ChatGPT.configure do |config|
  config.api_key = ENV['OPENAI_API_KEY']
  config.api_version = 'v1'
  config.default_engine = 'gpt-3.5-turbo'
  config.request_timeout = 30
  config.max_retries = 3
  config.default_parameters = {
    max_tokens: 16,
    temperature: 0.5,
    top_p: 1.0,
    n: 1
  }
end

Core Features

Chat Completions

# Chat with system message
response = client.chat([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" }
])

# With streaming
client.chat_stream([
  { role: "user", content: "Tell me a story" }
]) do |chunk|
  print chunk.dig("choices", 0, "delta", "content")
end
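
Streaming delivers the reply incrementally. A minimal sketch for collecting the chunks into one string, assuming the OpenAI delta format shown above (where some chunks, such as the final one, carry no content):

# Accumulate streamed deltas into the full reply
full_reply = +""

client.chat_stream([
  { role: "user", content: "Tell me a story" }
]) do |chunk|
  delta = chunk.dig("choices", 0, "delta", "content")
  full_reply << delta if delta # skip chunks without content
end

puts full_reply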

Function Calling

functions = [
  {
    name: "get_weather",
    description: "Get current weather",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      }
    }
  }
]

response = client.chat(
  [{ role: "user", content: "What's the weather in London?" }],
  functions: functions,
  function_call: "auto"
)
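
When the model opts to call a function, the arguments arrive as a JSON string that must be parsed before use. A sketch of the dispatch step, assuming the standard OpenAI function_call response shape (get_weather here is a hypothetical stand-in for your own implementation):

require 'json'

message = response.dig("choices", 0, "message")

if (call = message["function_call"])
  args = JSON.parse(call["arguments"]) # => { "location" => "London", ... }
  # Dispatch to your own implementation (hypothetical helper)
  result = get_weather(location: args["location"], unit: args["unit"])
else
  puts message["content"] # the model answered directly
end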

Image Generation (DALL-E)

# Generate image
image = client.images.generate(
  prompt: "A sunset over mountains",
  size: "1024x1024",
  quality: "hd"
)

# Create variations
variation = client.images.create_variation(
  image: File.open("input.png", "rb"), # open in binary mode for upload
  n: 1
)
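
The OpenAI Images API returns URLs (or base64 data) for the generated files. Assuming this client passes the raw API response through, the first URL can be read like so:

# Assumes the standard OpenAI response shape: { "data" => [{ "url" => ... }] }
url = image.dig("data", 0, "url")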

Fine-tuning

# Create fine-tuning job
job = client.fine_tunes.create(
  training_file: "file-abc123",
  model: "gpt-3.5-turbo"
)

# List fine-tuning jobs
jobs = client.fine_tunes.list

# Get job status
status = client.fine_tunes.retrieve(job.id)

Token Management

# Count tokens
count = client.tokens.count("Your text here", model: "gpt-4")

# Validate token limits
client.tokens.validate_messages(messages, max_tokens: 4000)
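
A pre-flight sketch combining the two: validate before sending and fall back gracefully. TokenLimitError is the error class shown under Error Handling below; long_document is a placeholder for your own input, and the truncation length is arbitrary:

messages = [{ role: "user", content: long_document }]

begin
  client.tokens.validate_messages(messages, max_tokens: 4000)
  response = client.chat(messages)
rescue ChatGPT::TokenLimitError
  # Input is too large: truncate (or summarize) before retrying
  messages = [{ role: "user", content: long_document[0, 8000] }]
  response = client.chat(messages)
end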

Error Handling

begin
  response = client.chat([...])
rescue ChatGPT::RateLimitError => e
  puts "Rate limit hit: #{e.message}"
rescue ChatGPT::APIError => e
  puts "API error: #{e.message}"
rescue ChatGPT::TokenLimitError => e
  puts "Token limit exceeded: #{e.message}"
end
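
The client already retries automatically up to max_retries (see Configuration). If you need manual control, a hand-rolled exponential backoff might look like this sketch:

attempts = 0

begin
  response = client.chat([{ role: "user", content: "Hello!" }])
rescue ChatGPT::RateLimitError
  attempts += 1
  raise if attempts > 3 # give up after a few tries
  sleep(2**attempts)    # back off: 2s, 4s, 8s
  retry
end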

Advanced Usage

Async Operations

client.async do
  response1 = client.chat([...])
  response2 = client.chat([...])
  [response1, response2]
end

Batch Operations

responses = client.batch do |batch|
  batch.add_chat(messages: [...])
  batch.add_chat(messages: [...])
end

Response Objects

response = client.chat([...])

response.content        # Main response content
response.usage          # Token usage information
response.finish_reason  # Why the response ended
response.model          # Model used
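
For example, usage can drive simple cost accounting, assuming the usual OpenAI usage fields:

usage = response.usage
puts "prompt: #{usage["prompt_tokens"]}, " \
     "completion: #{usage["completion_tokens"]}, " \
     "total: #{usage["total_tokens"]}"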

Development

# Run tests
bundle exec rake test

# Run linter
bundle exec rubocop

# Generate documentation
bundle exec yard doc

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b feature/my-new-feature)
  3. Add tests for your feature
  4. Make your changes
  5. Commit your changes (git commit -am 'Add some feature')
  6. Push to the branch (git push origin feature/my-new-feature)
  7. Create a new Pull Request

License

Released under the MIT License. See LICENSE for details.