OmniAI::Mistral

A Mistral implementation of the OmniAI APIs.

Installation

gem install omniai-mistral
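
When using Bundler, the gem may instead be added to a Gemfile:

gem 'omniai-mistral'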

Usage

Client

A client is set up as follows if ENV['MISTRAL_API_KEY'] exists:

client = OmniAI::Mistral::Client.new

A client may also be passed the following options:

  • api_key (required - default is ENV['MISTRAL_API_KEY'])
  • host (optional)
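
For example, both options may be provided directly when constructing the client (the values below are placeholders):

client = OmniAI::Mistral::Client.new(
  api_key: 'sk-...',
  host: 'https://api.mistral.ai'
)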

Configuration

Global configuration is supported for the following options:

OmniAI::Mistral.configure do |config|
  config.api_key = 'sk-...' # default: ENV['MISTRAL_API_KEY']
  config.host = '...' # default: 'https://api.mistral.ai'
end

Chat

A chat completion is generated by passing in prompts using any of a variety of formats:

completion = client.chat('Tell me a joke!')
completion.text # 'Why did the chicken cross the road? To get to the other side.'
completion = client.chat do |prompt|
  prompt.system('You are a helpful assistant.')
  prompt.user('What is the capital of Canada?')
end
completion.text # 'The capital of Canada is Ottawa.'

Model

model takes an optional string (default is mistral-medium-latest):

completion = client.chat('Provide code for fibonacci', model: OmniAI::Mistral::Chat::Model::CODESTRAL)
completion.text # 'def fibonacci(n)...end'

Mistral API Reference: model

Temperature

temperature takes an optional float between 0.0 and 1.0 (default is 0.7):

completion = client.chat('Pick a number between 1 and 5', temperature: 1.0)
completion.text # '3'

Mistral API Reference: temperature

Stream

stream takes an optional proc to stream responses in real-time chunks instead of waiting for the complete response:

stream = proc do |chunk|
  print(chunk.text) # 'Better', 'three', 'hours', ...
end
client.chat('Be poetic.', stream:)

Mistral API Reference: stream

Format

format takes an optional symbol (:json) that sets the response_format to json_object:

completion = client.chat(format: :json) do |prompt|
  prompt.system(OmniAI::Chat::JSON_PROMPT)
  prompt.user('What is the name of the drummer for the Beatles?')
end
JSON.parse(completion.text) # { "name": "Ringo" }

Mistral API Reference: response_format

When using JSON mode, you MUST also instruct the model to produce JSON using a system or user message.

Embed

Text can be converted into a vector embedding for similarity comparisons via:

response = client.embed('The quick brown fox jumps over a lazy dog.')
response.embedding # [0.0, ...]
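
As a sketch of such a comparison, two embeddings may be scored with cosine similarity (the math below is illustrative, not part of the gem):

a = client.embed('The quick brown fox jumps over a lazy dog.').embedding
b = client.embed('A quick brown fox leaps over a lazy dog.').embedding

# Cosine similarity: dot product divided by the product of vector magnitudes.
dot = a.zip(b).sum { |x, y| x * y }
magnitude = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
dot / (magnitude.call(a) * magnitude.call(b)) # close to 1.0 for similar text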