A Python chat application for local AI model inference, built with Flet for cross-platform compatibility.
- 🔄 Full Ollama integration - Compatible with all Ollama setups (see the API sketch after this list)
- 🔀 Model switching - Switch between different LLMs seamlessly
- 🌐 Network flexibility - Configure custom local IP addresses
- 📝 Rich markdown support - Displays tables, code blocks, and formatted text
- 🖥️ Cross-platform - Runs on Windows, macOS, and Linux
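For a sense of what the Ollama integration involves, here is a minimal, self-contained sketch (not code from this repo) of streaming a chat reply through Ollama's documented `/api/chat` endpoint. The `llama3` model name is an assumption, and the host is Ollama's default; the app's network setting is what would let you point this at a custom IP instead:

```python
import json
import requests  # third-party: pip install requests

# Ollama's default local endpoint; a custom IP from the app's
# network settings would be substituted here.
OLLAMA_URL = "http://localhost:11434/api/chat"

def chat(prompt: str, model: str = "llama3") -> str:
    """Send one user message and stream back the assistant's reply."""
    payload = {
        "model": model,  # assumed model name; use any model you've pulled
        "messages": [{"role": "user", "content": prompt}],
    }
    parts = []
    with requests.post(OLLAMA_URL, json=payload, stream=True) as resp:
        resp.raise_for_status()
        # Ollama streams newline-delimited JSON chunks until "done" is true.
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            parts.append(chunk.get("message", {}).get("content", ""))
            if chunk.get("done"):
                break
    return "".join(parts)

if __name__ == "__main__":
    print(chat("Why is the sky blue?"))
```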
To run the app you'll need:

- Python 3.8+
- Ollama installed and running
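Before launching the app, it's worth confirming that Ollama is actually reachable. Either of these quick checks works, using Ollama's standard CLI and its default port 11434:

```bash
# List the models installed in your local Ollama
ollama list

# Or query the REST API directly (Ollama listens on port 11434 by default)
curl http://localhost:11434/api/tags
```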
To install and run:

- Clone the repository:

  ```bash
  git clone https://github.com/Bestello/llam.git
  ```

- Navigate to the project directory:

  ```bash
  cd llam
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the application:

  ```bash
  flet run main.py
  ```
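If `flet run` fails, a throwaway script is a quick way to check that Flet itself installed correctly. The file name `hello.py` below is hypothetical, not part of this repo:

```python
import flet as ft

def main(page: ft.Page):
    # A window showing this one line confirms Flet can launch a UI.
    page.add(ft.Text("Flet is working"))

ft.app(target=main)
```

Run it with `flet run hello.py`; if a window appears, the problem is elsewhere.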
To create an Android APK using Flet:
- Install the Flet build tools (quoted so the brackets survive shells like zsh):

  ```bash
  pip install "flet[build]"
  ```

- Build the APK:

  ```bash
  flet build apk
  ```

- Optionally, build with custom metadata:

  ```bash
  flet build apk --project "LLAM Chat" --description "Local AI Chat App" --copyright "Your Name"
  ```
The APK will be generated in the `build/apk` directory.
Note: You'll need Android SDK and Java Development Kit (JDK) installed for APK building. Check Flet's documentation for detailed setup instructions.
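Once the build finishes, one common way to test on a real device is sideloading with Android's `adb` tool. The APK filename below is illustrative; check `build/apk` for the actual name:

```bash
# Device must be USB-connected with developer debugging enabled
adb install build/apk/app-release.apk
```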
Planned features:

- 🖼️ Image support for multimodal models
- 🎤 Text-to-Speech (TTS) integration
- 🔊 Speech-to-Text (STT) capabilities
- 📚 Chat history persistence
- ℹ️ About section
Areas for improvement:

- Codebase refactoring
- Improved error logging
- Better connection error handling
- Model loading optimization
- Chat context preservation (one possible approach is sketched below)
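On the context-preservation point: Ollama's `/api/chat` endpoint accepts the full message history with every request, so one general approach (a sketch, not the repo's current implementation) is to resend the accumulated history each turn:

```python
import requests  # third-party: pip install requests

history = []  # accumulated {"role", "content"} turns for this chat

def ask(prompt: str, model: str = "llama3") -> str:
    """Append the user turn, send the whole history, record the reply."""
    history.append({"role": "user", "content": prompt})
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": model, "messages": history, "stream": False},
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer
```

The trade-off is that the prompt grows with every turn, so a real implementation would eventually need to truncate or summarize older messages.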
Contributions are welcome! This project is actively being improved. Feel free to:
- 🐛 Report bugs
- 💡 Suggest features
- 🔧 Submit pull requests
This project is licensed under the MIT License - see the LICENSE file for details.
⭐ Star this repo if you find it useful!