---
title: Beaglemind Rag Poc
emoji: 👀
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# BeagleMind RAG PoC

An intelligent documentation assistant CLI tool for BeagleBoard projects that uses RAG (Retrieval-Augmented Generation) to answer questions about codebases and documentation.
## Features

- Multi-backend LLM support: Use both cloud (Groq) and local (Ollama) language models
- Intelligent search: Advanced semantic search with reranking and filtering
- Rich CLI interface: Beautiful command-line interface with syntax highlighting
- Persistent configuration: Save your preferences for seamless usage
- Source attribution: Get references to original documentation and code
## Installation

### From Source

```bash
# Clone the repository
git clone <repository-url>
cd rag_poc

# Install in development mode
pip install -e .

# Or install dependencies manually
pip install -r requirements.txt
```

### From PyPI

```bash
pip install beaglemind-cli
```
## Getting Started

### 1. Initialize the System

Before using the CLI, you need to initialize the system and load the knowledge base:

```bash
beaglemind init
```

This will:

- Connect to the Milvus vector database
- Load the document collection
- Set up the QA system
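Under the hood, initialization amounts to roughly the following; this is a minimal sketch using pymilvus directly, with the default collection name from the configuration section below, and is not BeagleMind's actual internals:

```python
import os
from pymilvus import connections, Collection  # pip install pymilvus

def init_knowledge_base(collection_name: str = "beaglemind_docs"):
    # 1. Connect to the Milvus vector database (credentials from the
    #    environment; see the Milvus configuration section below)
    connections.connect(
        alias="default",
        uri=os.getenv("MILVUS_URI"),
        token=os.getenv("MILVUS_TOKEN"),
    )
    # 2. Load the document collection into memory so it is searchable
    collection = Collection(collection_name)
    collection.load()
    # 3. The QA system is then built on top of this loaded collection
    return collection
```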
### 2. List Available Models

See what language models are available:

```bash
# List all models
beaglemind list-models

# List models for a specific backend
beaglemind list-models --backend groq
beaglemind list-models --backend ollama
```
### 3. Start Chatting

Ask questions about the documentation:

```bash
# Simple question
beaglemind chat -p "How do I configure the BeagleY-AI board?"

# With specific model and backend
beaglemind chat -p "Show me GPIO examples" --backend groq --model llama-3.3-70b-versatile

# With sources shown
beaglemind chat -p "What are the pin configurations?" --sources
```
## Command Reference

### beaglemind init

Initialize the BeagleMind system.

Options:

- `--collection, -c`: Collection name to use (default: beaglemind_docs)
- `--force, -f`: Force re-initialization

Examples:

```bash
beaglemind init
beaglemind init --collection my_docs
beaglemind init --force
```
### beaglemind list-models

List available language models.

Options:

- `--backend, -b`: Show models for a specific backend (groq/ollama)

Examples:

```bash
beaglemind list-models
beaglemind list-models --backend groq
```
### beaglemind chat

Chat with BeagleMind using natural language.

Options:

- `--prompt, -p`: Your question (required)
- `--backend, -b`: LLM backend (groq/ollama)
- `--model, -m`: Specific model to use
- `--temperature, -t`: Response creativity (0.0-1.0)
- `--strategy, -s`: Search strategy (adaptive/multi_query/context_aware/default)
- `--sources`: Show source references

Examples:

```bash
# Basic usage
beaglemind chat -p "How to flash an image to BeagleY-AI?"

# Advanced usage
beaglemind chat \
  -p "Show me Python GPIO examples" \
  --backend groq \
  --model llama-3.3-70b-versatile \
  --temperature 0.2 \
  --strategy adaptive \
  --sources

# Code-focused questions
beaglemind chat -p "How to implement I2C communication?" --sources

# Documentation questions
beaglemind chat -p "What are the system requirements?" --strategy context_aware
```
## Configuration

BeagleMind stores configuration in `~/.beaglemind_cli_config.json`:

```json
{
  "collection_name": "beaglemind_docs",
  "default_backend": "groq",
  "default_model": "llama-3.3-70b-versatile",
  "default_temperature": 0.3,
  "initialized": true
}
```
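Since the file is plain JSON, you can inspect or adjust the saved defaults programmatically; a minimal sketch (the path and keys are exactly those shown above):

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".beaglemind_cli_config.json"

def load_config() -> dict:
    # Returns the saved preferences, or {} before the first `beaglemind init`
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text())
    return {}

def save_config(config: dict) -> None:
    # Persist updated defaults back to the same file
    CONFIG_PATH.write_text(json.dumps(config, indent=2))

config = load_config()
config["default_temperature"] = 0.2
save_config(config)
```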
## Backend Setup

### Groq (Cloud)

Set your Groq API key:

```bash
export GROQ_API_KEY="your-api-key-here"
```
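To verify the key before running the CLI, a quick check with the groq Python package (assuming it is installed; the model name is one from the supported list below):

```python
import os
from groq import Groq  # pip install groq

# The client reads GROQ_API_KEY from the environment by default;
# it is passed explicitly here only for clarity.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```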
### Ollama (Local)

- Install Ollama: https://ollama.ai
- Pull a supported model: `ollama pull qwen3:1.7b`
- Ensure Ollama is running: `ollama serve`
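You can confirm the server is reachable before pointing BeagleMind at it by querying Ollama's REST API (default port 11434):

```python
import requests  # pip install requests

# /api/tags lists the models that have been pulled locally
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Available local models:", models)
```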
### Milvus Vector Database

Configure the Milvus/Zilliz connection in `.env`:

```
MILVUS_HOST=your-host
MILVUS_PORT=443
MILVUS_USER=your-username
MILVUS_PASSWORD=your-password
MILVUS_TOKEN=your-token
MILVUS_URI=your-uri
```
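To sanity-check the `.env` file before initializing, a short sketch assuming python-dotenv (the project may load these variables differently):

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current directory into the environment

# For Zilliz Cloud, URI + token are usually sufficient; self-hosted Milvus
# uses host/port and user/password instead.
required = ["MILVUS_URI", "MILVUS_TOKEN"]
missing = [var for var in required if not os.getenv(var)]
if missing:
    raise SystemExit(f"Missing in .env: {', '.join(missing)}")
print("Milvus settings found for", os.getenv("MILVUS_URI"))
```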
## Supported Models

### Groq (Cloud)

- llama-3.3-70b-versatile
- llama-3.1-8b-instant
- gemma2-9b-it
- meta-llama/llama-4-scout-17b-16e-instruct
- meta-llama/llama-4-maverick-17b-128e-instruct

### Ollama (Local)

- qwen3:1.7b
## Search Strategies

- adaptive: Automatically selects the best strategy based on question type
- multi_query: Uses multiple related queries for comprehensive results
- context_aware: Includes surrounding context from documents
- default: Standard semantic search
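The strategy names suggest a dispatch along the following lines; this is purely an illustration of the idea, not BeagleMind's actual heuristics:

```python
# Hypothetical sketch of an "adaptive" strategy picker.
CODE_HINTS = ("example", "code", "implement", "gpio", "i2c", "function")

def choose_strategy(question: str) -> str:
    q = question.lower()
    if any(hint in q for hint in CODE_HINTS):
        # code-centric questions benefit from surrounding file context
        return "context_aware"
    if len(q.split()) > 12:
        # long, multi-part questions: fan out into several sub-queries
        return "multi_query"
    return "default"

print(choose_strategy("Show me GPIO examples"))  # -> context_aware
```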
## Tips for Better Results

- Be specific: "How to configure GPIO pins on BeagleY-AI?" vs "GPIO help"
- Use technical terms: Include model names, component names, exact error messages
- Ask follow-up questions: Build on previous responses for deeper understanding
- Use `--sources`: See exactly where information comes from
- Try different strategies: Some work better for different question types
## Troubleshooting

- Not initialized: Run `beaglemind init` first.
- Groq errors: Set the GROQ_API_KEY environment variable.
- Ollama errors: Ensure Ollama is running: `ollama serve`
- Model not available: Check `beaglemind list-models` for available options.
## Development

### Running from Source

```bash
# Make the script executable
chmod +x beaglemind

# Run directly
./beaglemind --help

# Or with Python
python -m src.cli --help
```
### Adding New Models

Edit the model lists in `src/cli.py`:

```python
GROQ_MODELS = [
    "new-model-name",
    # ... existing models
]

OLLAMA_MODELS = [
    "new-local-model",
    # ... existing models
]
```
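After editing, run `beaglemind list-models` to confirm the new entries appear.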
## License

MIT License - see LICENSE file for details.
## Support

- GitHub Issues: https://github.com/beagleboard/beaglemind/issues
- Documentation: https://beaglemind.readthedocs.io
- Community: BeagleBoard forums