Image and video analyzer for Home Assistant using multimodal LLMs
🌟 Features · ⬇️ Quick Start Guide · 📖 Resources · 🪲 How to report Bugs · ☕ Support
LLM Vision is a Home Assistant integration that uses multimodal large language models to analyze images, videos, live camera feeds, and Frigate events. It can also keep track of analyzed events in a timeline, with an optional Timeline Card for your dashboard.
- Compatible with OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Ollama, Open WebUI, LocalAI, and providers with OpenAI-compatible endpoints.
- Analyzes images, video files, live camera feeds, and Frigate events
- Remembers people, pets and objects
- Maintains a timeline of camera events, so you can display them on your dashboard and ask Assist about them
- Seamlessly updates sensors based on data extracted from camera streams, images or videos
See the website for the latest features as well as examples.
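To give a rough idea of how an analysis is triggered in practice, the sketch below calls the integration's image analyzer action from a script and copies the answer into a helper. The action name `llmvision.image_analyzer`, its parameters (`provider`, `message`, `image_entity`, `max_tokens`), and the response field are assumptions to verify against the LLM Vision documentation; camera and helper entities are placeholders.

```yaml
# Illustrative sketch only – verify action and parameter names in the LLM Vision docs.
script:
  describe_front_door:
    alias: "Describe front door camera"
    sequence:
      - action: llmvision.image_analyzer    # analysis action exposed by the integration (assumed name)
        data:
          provider: YOUR_PROVIDER_ID        # config entry ID of the AI provider added during setup
          message: "Describe what is happening at the front door."
          image_entity:                     # camera entity to snapshot and send to the model
            - camera.front_door
          max_tokens: 100
        response_variable: analysis         # the action returns the model's text response
      - action: input_text.set_value        # mirror the answer into a helper, e.g. for a dashboard card
        target:
          entity_id: input_text.front_door_status
        data:
          value: "{{ analysis.response_text }}"   # response field name is an assumption
```

The same pattern works from the action section of an automation.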
Tip
LLM Vision is available in the default HACS repository. You can install it directly through HACS or click the button below to open it there.
1. Install LLM Vision from HACS
2. Restart Home Assistant
3. Search for LLM Vision in Home Assistant Settings/Devices & services
4. Press Submit to continue setup with default settings
5. Press 'Add Entry' to add your first AI Provider
Detailed setup instructions and documentation are available here: LLM Vision Documentation
With the easy-to-use blueprint, you'll get camera event notifications intelligently summarized by AI. LLM Vision can also store events in a timeline, so you can see what happened on your dashboard.
Learn how to install the blueprint
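If you'd rather build the notification automation by hand instead of using the blueprint, a sketch along the following lines is possible. As above, the action name `llmvision.stream_analyzer` and parameters such as `duration` and `remember` are assumptions to check against the documentation, and the trigger and notify entities are placeholders.

```yaml
# Hand-rolled alternative to the blueprint – names below are assumptions/placeholders.
automation:
  - alias: "AI summary for driveway motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion
        to: "on"
    action:
      - action: llmvision.stream_analyzer   # analyzes a short clip from the live feed (assumed name)
        data:
          provider: YOUR_PROVIDER_ID
          message: "Summarize what happens on the driveway in one sentence."
          image_entity:
            - camera.driveway
          duration: 5                       # seconds of the stream to analyze (assumed parameter)
          remember: true                    # store the event in the timeline (assumed parameter)
        response_variable: summary
      - action: notify.mobile_app_your_phone
        data:
          title: "Driveway activity"
          message: "{{ summary.response_text }}"
```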
Check the docs for detailed instructions on how to set up LLM Vision and each of the supported providers, get inspiration from examples or join the discussion on the Home Assistant Community.
For technical questions see the discussions tab.
Important
Bugs: If you encounter any bugs and have followed the instructions carefully, file a bug report. Please check open issues first and include debug logs in your report. Debugging can be enabled on the integration's settings page.

Feature Requests: If you have an idea for a feature, create a feature request.
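For bug reports, the debug logs mentioned above can also be enabled through `configuration.yaml` using Home Assistant's standard `logger` integration instead of the settings-page toggle; the logger name below assumes the component lives under `custom_components/llmvision` and may need adjusting.

```yaml
# Assumed logger name – adjust if the integration's folder/domain differs.
logger:
  default: warning
  logs:
    custom_components.llmvision: debug
```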
You can support this project by starring this GitHub repository. If you want, you can also buy me a coffee here: