A full-stack AI chat application that streams responses in real time using Ollama-hosted LLMs such as `llama3` or `gemma:1b`.
Built with:
- ⚡ Next.js (Frontend)
- 🧵 TailwindCSS (UI)
- 🌐 Express (Backend API)
- 🧠 Ollama for local LLM
- 🛢️ PostgreSQL (or SQLite in development) for chat history
```
├── components/          # Reusable React components
│   ├── ChatSidebar.tsx
│   ├── ChatWindow.tsx
│   ├── Message.tsx
│   └── Sidebar.tsx
├── pages/               # Next.js pages and API routes
│   ├── api/
│   │   └── chat.ts      # Serverless API endpoint
│   ├── _app.tsx         # Global App component
│   └── index.tsx        # Home page
├── prisma/              # Prisma ORM configuration
│   ├── migrations/      # Database migration history
│   ├── dev.db           # SQLite database file
│   └── schema.prisma    # Database schema definition
└── styles/
    └── globals.css      # Global CSS styles
```
- ✅ Real-time LLM chat via Ollama
- ✅ Responses streamed token-by-token
- ✅ Chats & messages saved to the database
- ✅ Switch between models (`llama3`, `gemma:1b`, etc.)
- ✅ Clean Tailwind UI
- ✅ Backend written in Express.js
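Ollama streams chat completions as newline-delimited JSON, one object per token. A minimal sketch of accumulating those chunks into a full reply (the `accumulate` helper name is illustrative; the chunk shape follows Ollama's documented `/api/chat` streaming format):

```typescript
// Shape of one streamed chunk from Ollama's /api/chat endpoint.
interface OllamaChunk {
  message?: { role: string; content: string };
  done: boolean;
}

// Parse a buffer of newline-delimited JSON chunks and concatenate
// the per-token contents into the assistant's full reply.
function accumulate(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as OllamaChunk)
    .map((chunk) => chunk.message?.content ?? "")
    .join("");
}
```

In the real UI the same logic runs incrementally, appending each chunk's content to the visible message as it arrives rather than waiting for `done: true`.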
| Tech | Role |
|---|---|
| Next.js | Frontend framework |
| TailwindCSS | Styling |
| Express | Backend API |
| PostgreSQL / SQLite | Database |
| Ollama | Local LLM engine |
| dotenv | Secure env config |
```bash
git clone https://github.com/your-username/ollama-chat-app.git
cd ollama-chat-app
```

```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3
ollama run llama3
```

```bash
cd backend
npm install
cp .env.example .env   # Add your DB connection string
npm start
```

```bash
cd frontend
npm install
npm run dev
```
| Endpoint | Method | Description |
|---|---|---|
| `/api/chat` | POST | Create a new chat & stream the response |
| `/api/chat/:chatId/messages` | POST | Send a message to an existing chat |
| `/api/chat/:chatId/stop` | POST | Stop response streaming |
| `/api/chats` | GET | List all chat sessions |
| `/api/chat/:chatId` | GET | Get the message history of a session |
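As a sketch of how a client might call these endpoints, the helper below builds a request descriptor for `POST /api/chat`. The function name, base URL, and exact body fields are illustrative assumptions, not the project's actual client code:

```typescript
// Descriptor for a fetch()-style HTTP request.
interface ChatRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// Build the request for creating a new chat. The body fields
// (model, message) are assumed from the README's description;
// adjust them to match the real backend contract.
function buildChatRequest(
  model: string,
  message: string,
  base = "http://localhost:3001"
): ChatRequest {
  return {
    url: `${base}/api/chat`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, message }),
  };
}
```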
`chats`

| Field | Type |
|---|---|
| id | SERIAL |
| title | VARCHAR |
| created_at | TIMESTAMP |

`messages`

| Field | Type |
|---|---|
| id | SERIAL |
| chat_id | INTEGER |
| role | VARCHAR |
| content | TEXT |
| timestamp | TIMESTAMP |
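The two tables above map naturally onto TypeScript types. A sketch (the interface and function names are illustrative, not taken from the project's code):

```typescript
// Typed mirror of the chats table.
interface Chat {
  id: number;
  title: string;
  created_at: Date;
}

// Typed mirror of the messages table.
interface Message {
  id: number;
  chat_id: number;
  role: "user" | "assistant";
  content: string;
  timestamp: Date;
}

// Convert a raw database row (e.g. as returned by `pg`, where numeric
// columns may arrive as strings) into a typed Message.
function toMessage(row: Record<string, unknown>): Message {
  return {
    id: Number(row.id),
    chat_id: Number(row.chat_id),
    role: row.role === "assistant" ? "assistant" : "user",
    content: String(row.content),
    timestamp: new Date(String(row.timestamp)),
  };
}
```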
You can use any model supported by Ollama:
```bash
ollama pull llama3
ollama pull gemma:1b
```
To switch models, update the request payload or configuration:

```ts
model: 'llama3' // or 'gemma:1b'
```
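It can be worth validating the requested model name on the backend before forwarding it to Ollama. A small sketch, where the allow-list and function name are illustrative assumptions (extend the list as you `ollama pull` more models):

```typescript
// Models this deployment knows about; keep in sync with `ollama pull`.
const SUPPORTED_MODELS = ["llama3", "gemma:1b"] as const;
type SupportedModel = (typeof SUPPORTED_MODELS)[number];

// Type-guard: narrows an arbitrary string from the request payload
// to one of the supported model names.
function isSupportedModel(name: string): name is SupportedModel {
  return (SUPPORTED_MODELS as readonly string[]).includes(name);
}
```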
Create a `.env` file:

```env
DATABASE_URL=postgresql://username:password@localhost:5432/ollama_chat
OLLAMA_API=http://localhost:11434
```
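A sketch of how the backend might read these variables, with a safe fallback for `OLLAMA_API` (the default matches Ollama's standard local port) and a fail-fast check for `DATABASE_URL`, which has no sensible default. The `loadConfig` helper name is illustrative; in the real backend, `import "dotenv/config"` would populate `process.env` from `.env` first:

```typescript
interface AppConfig {
  databaseUrl: string;
  ollamaApi: string;
}

// Read configuration from an env map (process.env by default).
function loadConfig(
  env: Record<string, string | undefined> = process.env
): AppConfig {
  const databaseUrl = env.DATABASE_URL;
  if (!databaseUrl) throw new Error("DATABASE_URL is not set");
  return {
    databaseUrl,
    ollamaApi: env.OLLAMA_API ?? "http://localhost:11434",
  };
}
```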
- ✅ Authentication with JWT
- 📥 Export chats as PDF or TXT
- 🌍 Multilingual LLM support
- 🧪 Unit and integration tests
This project is licensed under the MIT License.
Made with 💖 by Rishabh Dhawad