A high-performance Go CLI tool that integrates local LLMs (via the Ollama API) for code analysis, context-aware refactoring suggestions, and sophisticated prompt engineering within developer workflows.
- Code Analysis: Analyze code for quality, potential issues, and improvement opportunities
- Refactoring Suggestions: Get AI-powered suggestions to improve your code
- Documentation Generation: Automatically generate comprehensive documentation for your code
- Code Explanation: Get detailed explanations of complex code snippets
- Test Generation: Generate comprehensive tests for your code
- Performance Optimization: Get suggestions to optimize code performance
- Security Analysis: Identify potential security vulnerabilities in your code
- Code Comparison: Compare two code files and understand semantic differences
- Docstring Generation: Generate documentation for specific functions or methods
- Project Analysis: Analyze entire projects or directories of code files
- Local LLM Integration: Uses Ollama to run models locally for privacy and performance
- Go 1.16 or higher
- Ollama installed and running locally
```bash
# Clone the repository
git clone https://github.com/yourusername/cli-tool-go.git
cd cli-tool-go

# Build the binary
go build -o codeai

# Move to a directory in your PATH (optional)
mv codeai /usr/local/bin/
```
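Once the binary is on your `PATH`, the built-in help lists every available command:

```bash
codeai --help
```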
- Install Ollama from ollama.ai
- Pull a model (e.g., `ollama pull llama2` or `ollama pull gemma:7b`)
- Ensure Ollama is running (`ollama serve` in a terminal window)
- Configure default settings (optional):

```bash
codeai config set ollama.host http://localhost:11434
codeai config set ollama.model llama2
```
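To verify the server is reachable before running any commands, you can hit `/api/tags`, Ollama's endpoint for listing installed models:

```bash
# Returns a JSON list of installed models if Ollama is running
curl http://localhost:11434/api/tags
```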
```bash
# List the models available in Ollama
codeai models

# Analyze a file for quality issues and improvement opportunities
codeai analyze path/to/file.go --context "This is a web server handler" --stream

# Get AI-powered refactoring suggestions
codeai refactor path/to/file.go --requirements "Improve error handling" --save

# Generate documentation
codeai document path/to/file.go --output documentation.md

# Explain a code file in detail
codeai explain path/to/file.go --output explanation.md

# Generate tests
codeai gentest path/to/file.go --requirements "Include tests for edge cases" --save

# Get performance optimization suggestions
codeai optimize path/to/file.go --focus "Time complexity and memory usage"

# Identify potential security vulnerabilities
codeai security path/to/file.go --language "Go"

# Compare two files and understand their semantic differences
codeai compare original.go updated.go --output comparison.md

# Generate a docstring for a specific function
codeai docstring path/to/file.go functionName --clipboard

# Generate based on line numbers
codeai docstring path/to/file.go --line-start 10 --line-end 30

# Analyze an entire project or directory
codeai analyze-project ./myproject --context "This is a REST API" --extensions go,md
```
```bash
# View configuration
codeai config

# Set a configuration value
codeai config set ollama.model gemma3:12b

# Get a configuration value
codeai config get ollama.host
```
Most commands support the following options:
- `--model` or `-m`: Specify which LLM model to use
- `--temperature` or `-t`: Control randomness (0.0-1.0)
- `--stream` or `-S`: Stream the response in real time
- `--output` or `-o`: Save results to a file
- `--context` or `-c`: Provide additional context
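These options combine freely on a single invocation; for example (model name illustrative):

```bash
# Analyze with a specific model, low randomness, and streamed output
codeai analyze path/to/file.go -m codellama -t 0.2 -S
```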
CodeAI works with any model available in Ollama, including:
- Llama 2/3
- Gemma/Gemma 3
- Mistral/Mixtral
- CodeLlama
- And many more
```
codeai/
├── cmd/           # CLI commands
├── pkg/
│   ├── ollama/    # Ollama API client
│   ├── template/  # Prompt templates
│   └── clipboard/ # Clipboard utilities
├── examples/      # Example code files
├── main.go        # Entry point
└── README.md      # Documentation
```
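The `pkg/ollama` package wraps Ollama's REST API. Here is a rough sketch of the shape of that client; the `Generate` function and type names are illustrative, not the actual implementation, while the `/api/generate` endpoint and its `model`, `prompt`, and `stream` fields are part of Ollama's standard API:

```go
package ollama

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request mirrors the core fields of Ollama's /api/generate endpoint.
type Request struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// Response holds the part of the reply this tool cares about.
type Response struct {
	Response string `json:"response"`
}

// Generate sends a prompt to a locally running Ollama server and
// returns the model's full (non-streamed) completion.
func Generate(host, model, prompt string) (string, error) {
	body, err := json.Marshal(Request{Model: model, Prompt: prompt, Stream: false})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(host+"/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", fmt.Errorf("request to Ollama failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("Ollama returned %s", resp.Status)
	}

	var out Response
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}
```

With `stream` set to `true`, Ollama instead returns a sequence of JSON objects, which is what powers the `--stream` flag.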
To add a new command:

- Create a new file in the `cmd` directory
- Define your command structure and logic
- Add the command to `root.go`
Example:
```go
var newCommand = &cobra.Command{
	Use:   "newcommand [file]",
	Short: "Short description",
	Long:  `Longer description...`,
	Args:  cobra.ExactArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		// Command logic here
		return nil
	},
}

func init() {
	rootCmd.AddCommand(newCommand)
	// Add flags
}
```
The tool includes sophisticated error handling for common scenarios:
- Ollama not running or unavailable
- Invalid file paths
- Model loading issues
- Network connectivity problems
If Ollama is not running, the tool will provide helpful instructions to start it.
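As an illustration of how that detection might work, here is a hypothetical pre-flight health check; the `pingOllama` helper is a sketch, not the tool's actual code, though `/api/tags` is Ollama's real endpoint for listing installed models:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// pingOllama checks whether an Ollama server is reachable at host
// before any generation request is attempted.
func pingOllama(host string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(host + "/api/tags") // lists installed models
	if err != nil {
		return fmt.Errorf("cannot reach Ollama at %s: %w\nstart it with: ollama serve", host, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected response from Ollama: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := pingOllama("http://localhost:11434"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("Ollama is up")
}
```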