AiToolkit

AiToolkit provides a simple Ruby DSL for interacting with Anthropic's Claude models. It can talk directly to the Claude HTTP API or via AWS Bedrock.

Installation

Add this line to your application's Gemfile:

gem 'ai_toolkit'

And then execute:

bundle install

Or install it yourself as:

gem install ai_toolkit
Usage

Create a client with a provider and build a request with the block DSL:

client = AiToolkit::Client.new(provider)
response = client.request do |c|
  c.system_prompt 'My Prompt'
  c.message :user, 'Hello'
  c.tool :example_tool, {}
end

For Claude's built-in server-side tools, provide the tool name and any configuration options:

client.request do |c|
  c.tool :web_search, max_uses: 3, allowed_domains: ['example.com']
end

The response object exposes the stop reason and a chronologically ordered list of results via #results. Each element of this array is one of three types: AiToolkit::MessageResult for LLM messages, AiToolkit::ToolRequest for tool calls requested by the LLM, and AiToolkit::ToolResponse for the data returned to the model from executed tools. Each response also provides #execution_time, the number of seconds spent performing the LLM call.
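Dispatching on these result types is a simple case statement. The sketch below uses stand-in Structs so it runs without the gem installed; the class names come from this README, but the attribute names (text, name, payload) are illustrative assumptions, not the gem's documented API.

```ruby
# Stand-in Structs so this sketch runs without ai_toolkit; the real
# classes come from the gem, and the attribute names here are assumptions.
module AiToolkit
  MessageResult = Struct.new(:text)
  ToolRequest   = Struct.new(:name)
  ToolResponse  = Struct.new(:payload)
end

results = [
  AiToolkit::MessageResult.new('Checking the weather...'),
  AiToolkit::ToolRequest.new(:weather_lookup),
  AiToolkit::ToolResponse.new({ temp_c: 18 }),
  AiToolkit::MessageResult.new('It is 18 degrees.')
]

# Walk the chronological result list, dispatching on each element's type.
summary = results.map do |result|
  case result
  when AiToolkit::MessageResult then "message: #{result.text}"
  when AiToolkit::ToolRequest   then "tool call: #{result.name}"
  when AiToolkit::ToolResponse  then "tool data: #{result.payload.inspect}"
  end
end

puts summary
```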

Requests automatically loop through tool calls. A basic request looks like:

response = client.request do |c|
  c.system_prompt 'My Prompt'
  c.message :user, 'First message'
  c.tool MyToolObject
end

You can override the max_tokens value sent to the provider and the iteration limit for tool looping. A specific tool can be forced by passing a tool_choice hash:

client.request(max_tokens: 2048, max_iterations: 10,
               tool_choice: { type: 'tool', name: 'example_tool' }) do |c|
  c.message :user, 'Hello'
end

Additional generation options like temperature, top_k, and top_p can also be specified and are passed directly through to the provider:

client.request(temperature: 0.2, top_k: 5, top_p: 0.9) do |c|
  c.message :user, 'Hello'
end

A tool may terminate further LLM calls by raising AiToolkit::StopToolLoop from #perform.
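A minimal sketch of that termination pattern, assuming a tool is a plain object with a #perform method that receives a params hash (an assumption about the tool contract beyond what is stated above). AiToolkit::StopToolLoop is stubbed so the example runs standalone:

```ruby
# Stub of the gem's exception so this sketch runs without ai_toolkit.
module AiToolkit
  class StopToolLoop < StandardError; end
end

# Hypothetical tool that ends the loop once its work is done.
class FinishTool
  def perform(params)
    # Raising StopToolLoop halts the automatic tool loop.
    raise AiToolkit::StopToolLoop if params[:done]
    { status: 'continuing' }
  end
end

tool = FinishTool.new
first = tool.perform(done: false)   # normal tool result, loop continues
begin
  tool.perform(done: true)          # terminates further LLM calls
rescue AiToolkit::StopToolLoop
  stopped = true
end
```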

See lib/ai_toolkit/providers/claude.rb and lib/ai_toolkit/providers/bedrock.rb for the provider implementations. A simple fake provider for testing is available in lib/ai_toolkit/providers/fake.rb.

To use the Bedrock provider you need the aws-sdk-bedrockruntime gem. This gem is not included in ai_toolkit's runtime dependencies, so install it separately when required:

gem install aws-sdk-bedrockruntime

When building the Docker image for testing with docker-compose, the gem is installed automatically via Bundler's docker group:

docker-compose build
docker-compose up

Run the test suite with:

rake test

Hooks

AiToolkit::Client allows registering callbacks before and after each provider call:

client.before_request do |req, model:, provider:|
  # inspect or modify `req`
end

client.after_request do |req, res, model:, provider:|
  # inspect request and response
end

The before hook may modify the request hash. Errors raised by the before hook propagate and abort the LLM request. Errors raised in the after hook are swallowed but will stop any automatic tool loop.
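Since the before hook receives the request hash before the provider call, it can adjust parameters in place. The hook signature below matches the examples above; StubClient and the :max_tokens request key are assumptions used only so the sketch runs without the gem.

```ruby
# Tiny stand-in for AiToolkit::Client's hook machinery.
class StubClient
  def before_request(&blk)
    @before = blk
  end

  # Simulate the moment just before a provider call.
  def call(req, model:, provider:)
    @before.call(req, model: model, provider: provider) if @before
    req
  end
end

client = StubClient.new
client.before_request do |req, model:, provider:|
  # Cap the token budget regardless of what the caller asked for.
  req[:max_tokens] = [req[:max_tokens], 1024].min
end

req = client.call({ max_tokens: 4096 }, model: 'claude', provider: :fake)
puts req[:max_tokens]  # => 1024
```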
