An intelligent multi-agent AI assistant for managing Hatch projects using natural language.
hatch-agent uses a sophisticated multi-agent approach powered by strands-agents to help you manage your Hatch Python projects. Instead of a single AI making decisions, it employs:
- 2 Specialist Agents: Generate different approaches to your problem
  - ConfigurationSpecialist: Expert in pyproject.toml and dependencies
  - WorkflowSpecialist: Expert in testing, formatting, and CI/CD
- 1 Judge Agent: Evaluates suggestions using a consistent scoring framework
This ensures you get well-reasoned, reliable recommendations.
```bash
pip install hatch-agent
```

Before using hatch-agent, you need an active account with one of the supported LLM providers. This tool does not include any LLM services - you must bring your own API credentials and pay for usage according to your provider's pricing.
Create a configuration file at ~/.config/hatch-agent/config.toml or in your project root as .hatch-agent.toml.
- Get your API key: Sign up at platform.openai.com and create an API key
- Set up billing: Add a payment method in your OpenAI account settings
- Configure hatch-agent:
```toml
mode = "multi-agent"
underlying_provider = "openai"
model = "gpt-4"  # or "gpt-3.5-turbo" for lower cost

[underlying_config]
api_key = "sk-..."  # Your OpenAI API key
```

Alternative: Use environment variable

```bash
export OPENAI_API_KEY="sk-..."
```

Then in config:
```toml
mode = "multi-agent"
underlying_provider = "openai"
model = "gpt-4"
# underlying_config.api_key will be read from environment
```

- Get your API key: Sign up at console.anthropic.com
- Set up billing: Add payment method and purchase credits
- Configure hatch-agent:
```toml
mode = "multi-agent"
underlying_provider = "anthropic"
model = "claude-3-opus-20240229"  # or "claude-3-sonnet-20240229"

[underlying_config]
api_key = "sk-ant-..."  # Your Anthropic API key
```

Alternative: Use environment variable

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

- Set up AWS Account: Ensure you have an AWS account with Bedrock access enabled
- Request model access: In AWS Console, go to Bedrock and request access to desired models
- Create IAM credentials: Create an IAM user with Bedrock permissions
- Configure hatch-agent:
```toml
mode = "multi-agent"
underlying_provider = "bedrock"
model = "anthropic.claude-v2"  # or other Bedrock model

[underlying_config]
aws_access_key_id = "AKIA..."
aws_secret_access_key = "..."
region = "us-east-1"  # Your AWS region
```

Alternative: Use AWS credentials file or environment

```bash
# Configure AWS CLI or set environment variables
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-east-1"
```

Then in config:
```toml
mode = "multi-agent"
underlying_provider = "bedrock"
model = "anthropic.claude-v2"
# Credentials will be read from environment/AWS config
```

- Set up Azure OpenAI resource: Create an Azure OpenAI service in Azure Portal
- Deploy a model: Deploy a model like GPT-4 to get a deployment name
- Get credentials: Find your API key and endpoint in Azure Portal
- Configure hatch-agent:
```toml
mode = "multi-agent"
underlying_provider = "azure"
model = "gpt-4"

[underlying_config]
api_key = "..."  # Your Azure OpenAI key
api_base = "https://your-resource.openai.azure.com/"
api_version = "2024-02-15-preview"
deployment = "your-gpt4-deployment"  # Your deployment name
```

- Set up Google Cloud Project: Enable Vertex AI API
- Set up authentication: Create a service account and download credentials
- Configure hatch-agent:
```toml
mode = "multi-agent"
underlying_provider = "google"
model = "gemini-pro"

[underlying_config]
project_id = "your-project-id"
location = "us-central1"
# Set GOOGLE_APPLICATION_CREDENTIALS env var to path of credentials JSON
```

Set credentials:

```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```

- Get your API key: Sign up at cohere.com
- Set up billing: Add payment method
- Configure hatch-agent:
```toml
mode = "multi-agent"
underlying_provider = "cohere"
model = "command"

[underlying_config]
api_key = "..."  # Your Cohere API key
```

hatch-agent looks for configuration files in this order:

- `.hatch-agent.toml` in the current directory
- `~/.config/hatch-agent/config.toml` (Linux/macOS)
- `~/Library/Application Support/hatch-agent/config.toml` (macOS)
- `%APPDATA%\hatch-agent\config.toml` (Windows)
You can also specify a custom config file with the `--config` flag.
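For example, to point a one-off run at a project-specific config file (the path below is just an illustration):

```bash
hatch-agent --config ./configs/hatch-agent.staging.toml How do I add a docs environment?
```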
You are responsible for all API costs incurred. Here are approximate costs (as of 2025):
- OpenAI GPT-4: ~$0.03-0.06 per 1K tokens (input/output)
- OpenAI GPT-3.5: ~$0.001-0.002 per 1K tokens (much cheaper)
- Anthropic Claude 3 Opus: ~$0.015-0.075 per 1K tokens
- Anthropic Claude 3 Sonnet: ~$0.003-0.015 per 1K tokens (good balance)
- AWS Bedrock: Varies by model, similar to above
- Azure OpenAI: Same as OpenAI pricing plus Azure markup
Tip: Start with GPT-3.5-turbo or Claude 3 Sonnet for testing, then upgrade to GPT-4 or Claude 3 Opus for production use if needed.
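As a rough worked example using the rates above: a multi-agent run that consumes about 10K tokens in total would cost roughly $0.30-0.60 on GPT-4, versus around $0.01-0.02 on GPT-3.5-turbo. Actual usage per command varies with project size and the number of agent turns.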
- Use environment variables for sensitive credentials
- Add `.hatch-agent.toml` to your `.gitignore`
- Use different API keys for development and production
- Rotate keys regularly
- Set usage limits in your provider's dashboard
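For example, a minimal way to keep credentials out of version control (assuming the OpenAI setup shown earlier):

```bash
# Keep the local config file out of the repository
echo ".hatch-agent.toml" >> .gitignore

# Supply the key through the environment instead of the config file
export OPENAI_API_KEY="sk-..."
```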
See config.example.toml for more provider examples and configuration options.
Get AI-powered analysis of why your build failed, including test failures, formatting issues, and type checking errors.
```bash
# Analyze build failures in current directory
hatch-agent-explain

# Specify project directory
hatch-agent-explain --project-root /path/to/project

# Show all agent suggestions
hatch-agent-explain --show-all
```

What it does:
- Runs your tests via `hatch run test`
- Checks code formatting
- Checks type hints
- Analyzes all failures with AI agents
- Provides actionable recommendations
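Because the analysis invokes `hatch run test`, your project needs a `test` script defined in a Hatch environment. A minimal sketch, assuming pytest as the test runner:

```toml
[tool.hatch.envs.default]
dependencies = ["pytest"]

[tool.hatch.envs.default.scripts]
test = "pytest {args:tests}"
```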
Add dependencies to your project using plain English - the AI will determine the exact package, version, and location in pyproject.toml.
```bash
# Add a dependency
hatch-agent-add-dep add requests for http client

# Add to dev dependencies
hatch-agent-add-dep add pytest to dev dependencies

# Specify version
hatch-agent-add-dep I need pandas version 2.0 or higher

# Preview without making changes
hatch-agent-add-dep add numpy --dry-run

# Skip environment sync
hatch-agent-add-dep add flask --skip-sync
```

What it does:
- Parses your natural language request
- Determines the correct package name and version
- Identifies whether it should be main or optional dependency
- Modifies `pyproject.toml` correctly
- Syncs Hatch environment to install the package
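As an illustration of the kind of edit it makes, a request like "add pytest to dev dependencies" would typically land in the optional dev group (the version bound here is hypothetical):

```toml
[project.optional-dependencies]
dev = [
    "pytest>=8.0",  # added by hatch-agent; illustrative version bound
]
```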
Update dependencies to newer versions and automatically adapt your code to API changes with strict minimal change guidelines.
```bash
# Update to latest version
hatch-agent-update-dep requests --version latest

# Update to specific version
hatch-agent-update-dep pydantic --version ">=2.0.0"

# Preview changes without applying
hatch-agent-update-dep django --version 5.0.0 --dry-run

# Update without code changes (pyproject.toml only)
hatch-agent-update-dep flask --version 3.0.0 --no-code-changes

# Show all agent suggestions
hatch-agent-update-dep pandas --version 2.1.0 --show-all
```

What it does:
- Updates the dependency version in `pyproject.toml`
- Uses specialized AI agents to analyze API changes between versions
- Identifies breaking changes that require code modifications
- Generates minimal, necessary code changes only
- Syncs Hatch environment to install the new version
Strict Code Change Guidelines:
The update command uses specialized agents with extremely strict rules:
- ✅ ONLY changes required for API compatibility
- ✅ Updates import statements if API moved
- ✅ Changes method names if renamed in new version
- ✅ Adjusts parameters if signature changed
- ❌ NO refactoring or code improvements
- ❌ NO additional features or complexity
- ❌ NO style or formatting changes
- ❌ NO changes to unrelated code
The judge agent specifically scores solutions on minimalism (35 points out of 100) and heavily penalizes any unnecessary changes.
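For instance, when updating pydantic from 1.x to 2.x (which renamed the serialization method `.dict()` to `.model_dump()`), an acceptable change is limited to exactly that rename, with the surrounding code left untouched:

```python
# Before (pydantic 1.x)
payload = user.dict()

# After (pydantic 2.x) - only the renamed method call changes
payload = user.model_dump()
```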
Ask the AI agents any question about Hatch project management.
```bash
# Ask questions
hatch-agent How do I set up testing with pytest?
hatch-agent Configure my project for type checking
hatch-agent What's the best way to organize my Hatch environments?

# Show all agent suggestions
hatch-agent Setup CI/CD for my project --show-all
```

Every command uses the multi-agent system:
- ConfigurationSpecialist focuses on:
  - pyproject.toml structure
  - Dependency management
  - Build system configuration
  - PEP 621 compliance
- WorkflowSpecialist focuses on:
  - Testing frameworks
  - Code quality tools
  - Development workflows
  - Automation scripts
- Judge uses a consistent scoring framework:
  - Correctness (30 points)
  - Completeness (25 points)
  - Safety (20 points)
  - Best Practices (15 points)
  - Clarity (10 points)
This ensures similar inputs produce consistent, high-quality outputs.
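A rough sketch of how the judge's weighted rubric could be applied; the names and structure here are illustrative, not the actual hatch-agent internals:

```python
from dataclasses import dataclass

# Illustrative rubric mirroring the weights listed above (totals 100 points)
RUBRIC = {
    "correctness": 30,
    "completeness": 25,
    "safety": 20,
    "best_practices": 15,
    "clarity": 10,
}

@dataclass
class Suggestion:
    agent: str    # e.g. "ConfigurationSpecialist" or "WorkflowSpecialist"
    text: str
    scores: dict  # per-criterion scores in [0, 1], assigned by the judge

def total_score(s: Suggestion) -> float:
    """Weight each criterion by the rubric and sum to a 0-100 score."""
    return sum(RUBRIC[k] * s.scores.get(k, 0.0) for k in RUBRIC)

def pick_best(suggestions: list[Suggestion]) -> Suggestion:
    """The judge recommends the highest-scoring specialist suggestion."""
    return max(suggestions, key=total_score)
```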
All agents have comprehensive system prompts that:
- Define their expertise clearly
- Enforce structured output formats
- Ensure actionable recommendations
- Maintain consistency across similar inputs
Commands like add-dep can automatically:
- Parse AI suggestions into executable actions
- Modify `pyproject.toml` safely
- Run Hatch commands to sync environments
- Provide rollback guidance if needed
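A sketch of the idea; the action schema below is hypothetical and not the tool's actual internal format:

```python
import subprocess

# Hypothetical action parsed from an AI suggestion
action = {
    "kind": "add_dependency",
    "package": "requests",
    "target": "project.optional-dependencies.dev",
}

# After the pyproject.toml edit is applied, sync the Hatch environment.
# (The exact sync command the tool runs may differ.)
subprocess.run(["hatch", "env", "create"], check=True)
```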
```bash
# Run this when your build fails
hatch-agent-explain

# Output shows:
# ✓ Tests: FAILED
# ✓ Formatting: PASSED
# ✓ Type checking: FAILED
#
# Then provides detailed analysis and fixes
```

```bash
# Natural language request
hatch-agent-add-dep add black and ruff to my dev dependencies

# AI determines:
# - Package names: black, ruff
# - Target: optional-dependencies.dev
# - Versions: latest compatible
#
# Then modifies pyproject.toml and syncs environment
```

Through strands-agents, hatch-agent supports:
- OpenAI (GPT-3.5, GPT-4)
- Anthropic (Claude)
- AWS Bedrock
- Azure OpenAI
- Google (PaLM, Gemini)
- Cohere
```bash
# Clone the repository
git clone https://github.com/your-org/hatch-agent
cd hatch-agent

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest
```

```
hatch-agent/
├── src/hatch_agent/
│   ├── agent/
│   │   ├── core.py              # Main Agent class
│   │   ├── llm.py               # LLM client using strands-agents
│   │   └── multi_agent.py       # Multi-agent orchestration
│   ├── analyzers/
│   │   ├── build.py             # Build failure analysis
│   │   └── dependency.py        # Dependency management
│   └── commands/
│       ├── explain.py           # Build failure command
│       ├── add_dependency.py    # Add dependency command
│       └── multi_task.py        # General task command
```
MIT
Contributions welcome! Please feel free to submit a Pull Request.