Ollama integrations #357
Replies: 2 comments
-
Hi @browserstrangeness, your issue is only with the embedding model, but I'll walk through the full local setup and share my own working example, as if I were writing it for myself:

🔧 How can you connect any embedding model to Agent-Zero locally, without API keys and without OpenAI models?
🧠 Required setup

To use models locally, you must select at least two different models:
Example:
🔌 Connecting models to Agent-Zero (local setup)

1. Edit
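Since Agent-Zero talks to a local Ollama server over HTTP, a quick way to check that your embedding model works at all is to query Ollama's embeddings endpoint directly, outside Agent-Zero. This is only a minimal sketch under assumptions not stated in the thread: Ollama is running on its default port 11434, and the model name `nomic-embed-text` is just an example of an embedding model you have already pulled (`ollama pull nomic-embed-text`).

```python
# Minimal sketch: query a local Ollama embedding model directly.
# Assumptions (not from the thread): Ollama runs on its default port
# 11434, and "nomic-embed-text" is only an example model name.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build the POST request for Ollama's /api/embeddings endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def get_embedding(model: str, prompt: str) -> list:
    """Return the embedding vector produced by a local Ollama model."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.load(resp)["embedding"]
```

If `get_embedding("nomic-embed-text", "hello")` returns a non-empty list of floats, the model itself is fine and any remaining problem is on the Agent-Zero configuration side.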
-
Embedding Model: Ollama downloads the model onto your computer, and if it asks for more memory, ask Copilot for help. It worked 100% for me, and I'm not an expert in the subject.
-
In the Agent Zero settings, you can run it locally via Ollama with models such as llama3.2 and qwen-coder2.5 in the various model slots. I have tried non-OpenAI text embedding models with no luck. Are there any embedding models that have been tested and confirmed working, or is the OpenAI text embedding the only compatible one so far? Thanks for reading.