MindPalace is a private, local-first AI assistant that lets you upload and chat with your documents. Inspired by Sherlock Holmes' mental filing system, it parses and stores knowledge from your files into an in-memory vector store, ready to be queried via natural language.
- 🧾 Upload PDFs, text files, images (OCR), or URLs
- 🧠 Ask questions using a local LLM (via Ollama)
- 🗃️ All processing happens locally — no cloud, no leaks
- 🧩 Modular parser architecture
- 🎛️ Simple Gradio UI
- LangChain – Orchestration & document processing
- Ollama – Run LLMs like LLaMA3, Mistral, or Gemma locally
- FAISS – In-memory vector store
- Gradio – Web UI
- Tesseract OCR – For image-to-text conversion
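The core idea behind the in-memory vector store: embed each document chunk as a vector, then answer queries by nearest-neighbour search. The real implementation uses FAISS, but the concept can be illustrated with a minimal, dependency-free sketch (the `InMemoryVectorStore` class here is hypothetical, not MindPalace's actual code):

```python
import math

class InMemoryVectorStore:
    """Toy stand-in for a FAISS index: stores (vector, text) pairs
    and returns the closest texts by cosine similarity."""

    def __init__(self):
        self.entries = []  # list of (vector, text) pairs

    def add(self, vector, text):
        self.entries.append((vector, text))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def search(self, query_vector, k=1):
        # Rank stored entries by similarity to the query vector.
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(e[0], query_vector),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = InMemoryVectorStore()
store.add([1.0, 0.0], "Holmes files every fact in his mind palace.")
store.add([0.0, 1.0], "Watson writes up the case notes.")
print(store.search([0.9, 0.1], k=1)[0])  # → the Holmes sentence
```

In MindPalace the embeddings come from a real model and FAISS handles the indexing, but the retrieval step follows the same shape.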
# Clone the repo
git clone https://github.com/SherLock707/MindPalace.git
cd MindPalace
# Install dependencies
pip install -r requirements.txt
# Optional: install Tesseract for image (OCR) support
# Ubuntu: sudo apt install tesseract-ocr
# macOS: brew install tesseract
Start your local LLM first:
ollama run llama3
Then start the MindPalace server:
python main.py
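Under the hood, a retrieval-augmented chat turn roughly does three things: retrieve the top-k relevant chunks, stuff them into a prompt alongside the question, and send that prompt to the local LLM. A hedged sketch of the prompt-assembly step (the `build_rag_prompt` helper is illustrative; the actual chain is wired up with LangChain in `helpers/llm_chain.py`):

```python
def build_rag_prompt(question, chunks):
    """Assemble a retrieval-augmented prompt: retrieved document
    chunks as numbered context, followed by the user's question."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(chunks))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "Where does Holmes store facts?",
    ["Holmes files every fact in his mind palace."],
)
print(prompt)
```

The numbered context markers also make it easy to trace which chunk an answer came from.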
MindPalace/
├── main.py
├── requirements.txt
├── README.md
└── helpers/
    ├── parser_factory.py
    ├── vectorstore.py
    ├── llm_chain.py
    ├── ui.py
    └── parsers/
        ├── pdf_parser.py
        ├── txt_parser.py
        ├── image_parser.py
        └── url_parser.py
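The modular parser architecture boils down to dispatching each upload to the right parser by file type. A minimal sketch of that factory pattern, assuming a simple extension-based lookup (the function and table names here are hypothetical; the real dispatch lives in `helpers/parser_factory.py`):

```python
from pathlib import Path

def parse_txt(path):
    """Plain-text parser: just read the file."""
    return Path(path).read_text(encoding="utf-8")

def parse_pdf(path):
    """Placeholder: the real parser would use a PDF library."""
    raise NotImplementedError

# Map file extensions to parser callables.
PARSERS = {
    ".txt": parse_txt,
    ".md": parse_txt,
    ".pdf": parse_pdf,
}

def get_parser(path):
    """Pick a parser for the given file, or fail loudly for unknown types."""
    suffix = Path(path).suffix.lower()
    try:
        return PARSERS[suffix]
    except KeyError:
        raise ValueError(f"Unsupported file type: {suffix}")

print(get_parser("notes.txt").__name__)  # → parse_txt
```

Adding a new input format then means writing one parser module and registering it in the table, without touching the rest of the pipeline.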
MindPalace runs entirely on your machine. No API calls, no telemetry, no vendor lock-in.
- Add persistent vector DB option (Chroma, SQLite)
- Highlight citations in responses
- Export & reload sessions
- CLI interface for headless mode
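The persistent vector DB item on the roadmap amounts to writing the (vector, text) pairs somewhere durable so the index survives restarts. A rough sketch of what the SQLite option could look like, using only the standard library (`save_vectors` and `load_vectors` are hypothetical names, not part of the current codebase):

```python
import json
import sqlite3

def save_vectors(db_path, entries):
    """Persist (vector, text) pairs to SQLite; vectors stored as JSON."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS vectors (vec TEXT, text TEXT)")
    conn.executemany(
        "INSERT INTO vectors VALUES (?, ?)",
        [(json.dumps(vec), text) for vec, text in entries],
    )
    conn.commit()
    conn.close()

def load_vectors(db_path):
    """Read the pairs back, decoding each vector from JSON."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT vec, text FROM vectors").fetchall()
    conn.close()
    return [(json.loads(vec), text) for vec, text in rows]
```

Chroma would handle this (plus indexing) out of the box; the SQLite route keeps the zero-dependency spirit of the project.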
MIT License. Yours to hack, extend, and evolve.
🧑‍💻 Author
Built by an engineer who believes your documents should stay yours. Let AI read your files without sending them anywhere else.