A terminal user interface (TUI) application for managing local Ollama models, written in Rust.
(Demo video: `lazyollama.mp4`)
- List Models: Displays a scrollable list of locally installed Ollama models.
- Search & Filter Models: Real-time search and filtering of installed models using the `/` key.
- Run Models: Run any of the locally installed Ollama models.
- Inspect Models: Shows detailed information for the selected model (size, modification date, digest, family, parameters, etc.).
- Delete Models: Allows deleting the selected model with a confirmation prompt.
- Install Models: Allows pulling new models from the Ollama registry with search and filter capabilities.
- Registry Search: Search and filter through available models in the Ollama registry during installation.
- Environment Variable: Uses the `OLLAMA_HOST` environment variable for the Ollama API endpoint (defaults to `http://localhost:11434`), as sketched below.
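As a minimal sketch of this fallback behavior (illustrative only; `ollama_host` is a hypothetical helper, not lazyollama's actual code):

```rust
use std::env;

/// Resolve the Ollama API base URL, falling back to the default
/// local endpoint when OLLAMA_HOST is unset. Hypothetical helper.
fn ollama_host() -> String {
    env::var("OLLAMA_HOST").unwrap_or_else(|_| "http://localhost:11434".to_string())
}

fn main() {
    println!("Using Ollama at {}", ollama_host());
}
```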
- Rust toolchain (Install from rustup.rs)
- A running Ollama instance (ollama.com)
- Homebrew (macOS / Linux) (brew.sh)
Install using the official Homebrew tap.
Option 1 (Tap first, then install):
```bash
# Add the custom tap
brew tap webmatze/tap

# Install the tool
brew install lazyollama
```

Option 2 (Direct install):
Homebrew can automatically tap and install in one step if you provide the full formula name:
```bash
brew install webmatze/tap/lazyollama
```

Upgrading:
To upgrade to the latest version:
```bash
# Update Homebrew and all formulas (including lazyollama)
brew update
brew upgrade lazyollama
```

This is the simplest way to build and install LazyOllama to a system-wide location:
```bash
# 1. Clone the repository
git clone https://github.com/webmatze/lazyollama.git
cd lazyollama

# 2. Run the installation script
chmod +x install.sh
./install.sh
```

The script will:
- Check for required dependencies
- Build the release version
- Install it to the appropriate location for your OS (typically `/usr/local/bin` on Unix-like systems)
- Set appropriate permissions
If you have Rust installed, you can install directly using Cargo:
```bash
# 1. Clone the repository
git clone https://github.com/webmatze/lazyollama.git
cd lazyollama

# 2. Install using cargo
cargo install --path .
```

This will install the binary to your Cargo bin directory (typically `~/.cargo/bin/`), which should be in your `PATH`.
If you prefer to manually build and place the binary:
```bash
# 1. Clone the repository
git clone https://github.com/webmatze/lazyollama.git
cd lazyollama

# 2. Build the application
cargo build --release

# 3. Copy the binary to a location in your PATH (optional)
# On Linux/macOS (may require sudo)
sudo cp target/release/lazyollama /usr/local/bin/
```

The executable will be located at `target/release/lazyollama`.
- Linux/macOS: Installation to system directories (like `/usr/local/bin`) typically requires root privileges (sudo).
- Windows: The installation script will attempt to install to an appropriate location, but you may need to adjust your `PATH` environment variable.
After installation, verify that lazyollama is correctly installed and accessible:
```bash
# Check if the command is available
which lazyollama

# Run lazyollama
lazyollama
```

If the command isn't found, ensure the installation location is in your `PATH`.
- Run the application:

  ```bash
  lazyollama
  ```
- Set Custom Ollama Host (Optional): If your Ollama instance is running on a different host or port, set the `OLLAMA_HOST` environment variable before running:

  ```bash
  export OLLAMA_HOST="http://your-ollama-host:port"
  lazyollama
  ```
- `q`: Quit the application.
- `h`/`?`: Show/Hide help screen.
- `↓`/`j`: Move selection down.
- `↑`/`k`: Move selection up.
- `Enter`: Run the selected model in `ollama`.
- `d`: Initiate deletion of the selected model (shows confirmation).
- `/`: Enter filter mode to search through installed models.
- `Ctrl+C`: Clear current filter.
- Type: Enter search text to filter models in real-time.
- `Backspace`: Remove characters from search.
- `←`/`→`: Move cursor within search input.
- `Enter`: Confirm filter and return to model list.
- `Esc`: Cancel filter and clear search.
- `i`: Open install dialog to browse and install new models from registry.
- `/`: (During install) Filter available registry models.
- `Ctrl+C`: (During install) Clear registry filter.
- `y`/`Y`: Confirm action (delete, install, etc.).
- `n`/`N`/`Esc`: Cancel action or go back.
LazyOllama provides powerful search capabilities for your installed models:
- Quick Search: Press
/from the main model list to enter filter mode - Real-time Filtering: Type to instantly filter models as you type
- Case-insensitive: Search works regardless of capitalization
- Partial Matching: Find models by typing any part of their name
- Visual Feedback: See filtered count in the title bar (e.g., "Models (filtered: 3/10)")
Example: Type "llama" to show only models containing "llama" in their name.
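For intuition, here is a minimal sketch of this kind of case-insensitive, partial-match filter in Rust (illustrative only; `filter_models` and the sample model names are assumptions, not lazyollama's actual code):

```rust
/// Case-insensitive, partial-match filter over model names.
/// Hypothetical helper, not lazyollama's real API.
fn filter_models<'a>(models: &'a [String], query: &str) -> Vec<&'a String> {
    let needle = query.to_lowercase();
    models
        .iter()
        .filter(|name| name.to_lowercase().contains(&needle))
        .collect()
}

fn main() {
    let models = vec![
        "llama3:8b".to_string(),
        "codellama:13b".to_string(),
        "mistral:7b".to_string(),
    ];
    // Typing "LLaMA" matches both "llama3:8b" and "codellama:13b",
    // regardless of capitalization or position in the name.
    assert_eq!(filter_models(&models, "LLaMA").len(), 2);
}
```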
When installing new models (`i` key), you can also search through available models:
- Browse Registry: Press `i` to see all available models from the Ollama registry
- Filter Registry: Press `/` within the install dialog to filter available models
- Find Models Fast: Quickly locate specific models from hundreds of available options
- Smart Filtering: Same real-time, case-insensitive search as local models
Example: In install mode, type "code" to find all code-related models like "codellama", "codegemma", etc.
Both local and registry filters support the same intuitive controls:
- Arrow keys or mouse: Position cursor anywhere in search text
- Backspace: Delete characters before cursor
- Ctrl+C: Instantly clear current filter
- Enter: Apply filter and return to browsing
- Esc: Cancel and clear filter
This project uses the following main Rust crates:
- `ratatui`: For building the TUI.
- `crossterm`: Terminal manipulation backend for `ratatui`.
- `tokio`: Asynchronous runtime.
- `reqwest`: HTTP client for interacting with the Ollama API.
- `serde`: For serializing/deserializing API data.
- `humansize`: For formatting file sizes.
- `thiserror`: For error handling boilerplate.
- `dotenvy`: (Optional) For loading `.env` files if needed.
See Cargo.toml for the full list and specific versions.
The application follows a simple event loop architecture:
- Initialization: Sets up the terminal, initializes `AppState`, and fetches the initial list of models from the Ollama API.
- Event Loop:
  - Draws the UI based on the current `AppState`.
  - Checks for user input (keyboard events) and results from background tasks (via channels).
  - Handles input: Updates `AppState` (e.g., changes selection, enters delete mode, quits).
  - Handles background task results (e.g., updates model details).
  - Triggers background tasks (e.g., fetching model details) when necessary.
- Cleanup: Restores the terminal state on exit.
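The sketch below illustrates the shape of such a loop. It is illustrative only: the type and function names (`AppState`, `TaskResult`, `draw_ui`, `poll_input`) are assumptions, not lazyollama's real definitions, and std channels stand in here for the tokio channels the app actually uses:

```rust
use std::sync::mpsc::{channel, Receiver};

/// Illustrative types only; not lazyollama's real definitions.
struct AppState {
    should_quit: bool,
    status: String,
}

enum TaskResult {
    ModelDetails(String),
}

enum Input {
    Quit,
}

// Stubbed-out terminal interaction; a real TUI would use ratatui/crossterm.
fn draw_ui(state: &AppState) {
    println!("status: {}", state.status);
}

fn poll_input() -> Option<Input> {
    Some(Input::Quit) // pretend the user pressed `q`
}

fn run(state: &mut AppState, results: &Receiver<TaskResult>) {
    while !state.should_quit {
        draw_ui(state); // 1. render from the current state
        // 2. drain any finished background tasks (non-blocking)
        while let Ok(TaskResult::ModelDetails(details)) = results.try_recv() {
            state.status = details;
        }
        // 3. handle user input by mutating AppState
        if let Some(Input::Quit) = poll_input() {
            state.should_quit = true;
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    tx.send(TaskResult::ModelDetails("llama3: 4.7 GB".into())).unwrap();
    let mut state = AppState { should_quit: false, status: String::new() };
    run(&mut state, &rx);
}
```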
```mermaid
graph TD
    A[User Input] --> B[Event Loop]
    B --> C[AppState]
    C --> D[UI Renderer]
    D --> E[Terminal Display]
    B --> F[Background Tasks]
    F --> G[Ollama API]
    G --> F
    F --> C

    subgraph Event Handler
        B
        C
    end

    subgraph UI Layer
        D
        E
    end

    subgraph API Layer
        F
        G
    end
```
- Connection Errors: Ensure your Ollama instance is running and accessible at the specified `OLLAMA_HOST` (or the default `http://localhost:11434`). Check firewalls if necessary.
- API Errors: If the Ollama API returns errors, they should be displayed in the status bar. Refer to the Ollama server logs for more details.
- Rendering Issues: Terminal rendering can vary. Ensure you are using a modern terminal emulator with good Unicode and color support.
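As a quick connectivity sanity check, a standalone sketch like the following can confirm the API is reachable (assumes the `reqwest` crate with its `blocking` feature enabled; `GET /api/tags` is the Ollama endpoint that lists local models):

```rust
// Quick check that the Ollama API responds at the configured host.
fn main() -> Result<(), reqwest::Error> {
    let host = std::env::var("OLLAMA_HOST")
        .unwrap_or_else(|_| "http://localhost:11434".to_string());
    let resp = reqwest::blocking::get(format!("{host}/api/tags"))?;
    println!("Ollama reachable: HTTP {}", resp.status());
    Ok(())
}
```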
