beagleboard-gsoc/BeagleMind-RAG-PoC

---
title: Beaglemind Rag Poc
emoji: 👀
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

BeagleMind CLI

An intelligent documentation assistant CLI tool for BeagleBoard projects that uses RAG (Retrieval-Augmented Generation) to answer questions about codebases and documentation.

Features

  • Multi-backend LLM support: Use both cloud (Groq) and local (Ollama) language models
  • Intelligent search: Advanced semantic search with reranking and filtering
  • Rich CLI interface: Beautiful command-line interface with syntax highlighting
  • Persistent configuration: Save your preferences for seamless usage
  • Source attribution: Get references to original documentation and code

Installation

Development Installation

# Clone the repository
git clone <repository-url>
cd rag_poc

# Install in development mode
pip install -e .

# Or install dependencies manually
pip install -r requirements.txt

Using pip (when published)

pip install beaglemind-cli

Quick Start

1. Initialize BeagleMind

Before using the CLI, you need to initialize the system and load the knowledge base:

beaglemind init

This will:

  • Connect to the Milvus vector database
  • Load the document collection
  • Set up the QA system

2. List Available Models

See what language models are available:

# List all models
beaglemind list-models

# List models for specific backend
beaglemind list-models --backend groq
beaglemind list-models --backend ollama

3. Start Chatting

Ask questions about the documentation:

# Simple question
beaglemind chat -p "How do I configure the BeagleY-AI board?"

# With specific model and backend
beaglemind chat -p "Show me GPIO examples" --backend groq --model llama-3.3-70b-versatile

# With sources shown
beaglemind chat -p "What are the pin configurations?" --sources

CLI Commands

beaglemind init

Initialize the BeagleMind system.

Options:

  • --collection, -c: Collection name to use (default: beaglemind_docs)
  • --force, -f: Force re-initialization

Examples:

beaglemind init
beaglemind init --collection my_docs
beaglemind init --force

beaglemind list-models

List available language models.

Options:

  • --backend, -b: Show models for specific backend (groq/ollama)

Examples:

beaglemind list-models
beaglemind list-models --backend groq

beaglemind chat

Chat with BeagleMind using natural language.

Options:

  • --prompt, -p: Your question (required)
  • --backend, -b: LLM backend (groq/ollama)
  • --model, -m: Specific model to use
  • --temperature, -t: Response creativity (0.0-1.0)
  • --strategy, -s: Search strategy (adaptive/multi_query/context_aware/default)
  • --sources: Show source references

Examples:

# Basic usage
beaglemind chat -p "How to flash an image to BeagleY-AI?"

# Advanced usage
beaglemind chat \
  -p "Show me Python GPIO examples" \
  --backend groq \
  --model llama-3.3-70b-versatile \
  --temperature 0.2 \
  --strategy adaptive \
  --sources

# Code-focused questions
beaglemind chat -p "How to implement I2C communication?" --sources

# Documentation questions  
beaglemind chat -p "What are the system requirements?" --strategy context_aware

Configuration

BeagleMind stores configuration in ~/.beaglemind_cli_config.json:

{
  "collection_name": "beaglemind_docs",
  "default_backend": "groq", 
  "default_model": "llama-3.3-70b-versatile",
  "default_temperature": 0.3,
  "initialized": true
}
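The configuration above can be read (and merged with defaults) in just a few lines. This sketch is illustrative, not the CLI's actual loader; the file path and default values come from this README, while the `load_config` helper name is an assumption:

```python
import json
from pathlib import Path

# Path documented in this README; the helper name is hypothetical.
CONFIG_PATH = Path.home() / ".beaglemind_cli_config.json"

def load_config(path=CONFIG_PATH):
    """Return the saved CLI configuration, falling back to defaults."""
    defaults = {
        "collection_name": "beaglemind_docs",
        "default_backend": "groq",
        "default_model": "llama-3.3-70b-versatile",
        "default_temperature": 0.3,
        "initialized": False,
    }
    path = Path(path)
    if path.exists():
        # Saved values override defaults; unknown keys are kept as-is.
        defaults.update(json.loads(path.read_text()))
    return defaults
```

Because missing keys fall back to defaults, a hand-edited or partially written config file still yields a complete configuration.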

Environment Setup

For Groq (Cloud)

Set your Groq API key:

export GROQ_API_KEY="your-api-key-here"

For Ollama (Local)

  1. Install Ollama: https://ollama.ai
  2. Pull a supported model:
    ollama pull qwen3:1.7b
  3. Ensure Ollama is running:
    ollama serve
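A quick way to verify step 3 programmatically: Ollama's server answers a plain HTTP GET on its default port (11434) when it is up. This is a best-effort sketch using only the standard library; the `ollama_running` helper is not part of the CLI:

```python
import urllib.request
import urllib.error

def ollama_running(base_url="http://localhost:11434"):
    """Best-effort health check: returns True if an Ollama server answers."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: treat the service as down.
        return False
```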

Vector Database

Configure Milvus/Zilliz connection in .env:

MILVUS_HOST=your-host
MILVUS_PORT=443
MILVUS_USER=your-username  
MILVUS_PASSWORD=your-password
MILVUS_TOKEN=your-token
MILVUS_URI=your-uri
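If you are not already using a loader such as python-dotenv, a `.env` file in the simple `KEY=VALUE` form shown above can be parsed with the standard library alone. This is a minimal sketch (the `load_env` helper is hypothetical, not the project's loader):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```

Note this handles only plain assignments; quoting, interpolation, and multi-line values need a real dotenv library.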

Available Models

Groq (Cloud)

  • llama-3.3-70b-versatile
  • llama-3.1-8b-instant
  • gemma2-9b-it
  • meta-llama/llama-4-scout-17b-16e-instruct
  • meta-llama/llama-4-maverick-17b-128e-instruct

Ollama (Local)

  • qwen3:1.7b

Search Strategies

  • adaptive: Automatically selects best strategy based on question type
  • multi_query: Uses multiple related queries for comprehensive results
  • context_aware: Includes surrounding context from documents
  • default: Standard semantic search
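The actual strategy-selection logic lives in the project source; purely as an illustration, the kind of routing the adaptive strategy performs could be approximated with a keyword heuristic like this (the function name, keywords, and mapping are all hypothetical):

```python
def pick_strategy(question: str) -> str:
    """Illustrative heuristic: map a question's shape to a search strategy."""
    q = question.lower()
    if any(kw in q for kw in ("example", "code", "snippet", "implement")):
        # Code questions benefit from surrounding document context.
        return "context_aware"
    if " and " in q or "," in q:
        # Compound questions fan out well into multiple sub-queries.
        return "multi_query"
    return "default"
```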

Tips for Best Results

  1. Be specific: "How to configure GPIO pins on BeagleY-AI?" vs "GPIO help"

  2. Use technical terms: Include model names, component names, exact error messages

  3. Ask follow-up questions: Build on previous responses for deeper understanding

  4. Use --sources: See exactly where information comes from

  5. Try different strategies: Some work better for different question types

Troubleshooting

"BeagleMind is not initialized"

Run beaglemind init first.

"No API Key" for Groq

Set the GROQ_API_KEY environment variable.

"Service Down" for Ollama

Ensure Ollama is running: ollama serve

"Model not available"

Check beaglemind list-models for available options.

Development

Running from Source

# Make the script executable
chmod +x beaglemind

# Run directly
./beaglemind --help

# Or with Python
python -m src.cli --help

Adding New Models

Edit the model lists in src/cli.py:

GROQ_MODELS = [
    "new-model-name",
    # ... existing models
]

OLLAMA_MODELS = [
    "new-local-model",
    # ... existing models  
]

License

MIT License - see LICENSE file for details.

Support
