🧠 AI Chat App with Ollama, Next.js & PostgreSQL

A full-stack AI chat application that streams responses in real time from local Ollama LLMs such as llama3 or gemma:1b.

Built with:

  • Next.js (Frontend)
  • 🧵 TailwindCSS (UI)
  • 🌐 Express (Backend API)
  • 🧠 Ollama for local LLM
  • 🛢️ PostgreSQL for chat history

📁 Project Structure

    ├── components/             # Reusable React components
    │   ├── ChatSidebar.tsx
    │   ├── ChatWindow.tsx
    │   ├── Message.tsx
    │   └── Sidebar.tsx
    ├── pages/                  # Next.js pages and API routes
    │   ├── api/
    │   │   └── chat.ts         # Serverless API endpoint
    │   ├── _app.tsx            # Global App component
    │   └── index.tsx           # Home page
    ├── prisma/                 # Prisma ORM configuration
    │   ├── migrations/         # Database migration history
    │   ├── dev.db              # SQLite database file
    │   └── schema.prisma       # Database schema definition
    └── styles/
        └── globals.css         # Global CSS styles

🔧 Features

  • ✅ Real-time LLM chat via Ollama
  • ✅ Token-by-token streaming of responses
  • ✅ Chats and messages persisted to the database
  • ✅ Switch between models (llama3, gemma, etc.)
  • ✅ Clean TailwindCSS UI
  • ✅ Express.js backend API
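
The token-by-token streaming above relies on Ollama emitting newline-delimited JSON, where each line carries a small `response` fragment until a final `done: true` line. As a minimal sketch (the chunk shape follows Ollama's `/api/generate` format; the function and variable names are illustrative, not taken from this repository):

```typescript
// Each streamed NDJSON line from Ollama's /api/generate looks roughly like:
// {"model":"llama3","response":"Hel","done":false}
interface GenerateChunk {
  response?: string;
  done?: boolean;
}

// Collect the `response` fragments from a buffer of NDJSON lines.
function collectTokens(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => (JSON.parse(line) as GenerateChunk).response ?? "")
    .join("");
}

const sample =
  '{"response":"Hel","done":false}\n' +
  '{"response":"lo","done":false}\n' +
  '{"response":"!","done":true}\n';

console.log(collectTokens(sample)); // "Hello!"
```

In a real client you would apply this line-splitting incrementally as chunks arrive, rather than on a completed buffer.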

🛠️ Tech Stack

| Tech | Role |
| --- | --- |
| Next.js | Frontend framework |
| TailwindCSS | Styling |
| Express | Backend API |
| PostgreSQL / SQLite | Database |
| Ollama | Local LLM engine |
| dotenv | Environment configuration |

⚙️ Setup Instructions

1. Clone the repository

    git clone https://github.com/your-username/ollama-chat-app.git
    cd ollama-chat-app

2. Setup & Run Ollama

Install Ollama

    curl -fsSL https://ollama.com/install.sh | sh

Pull the model

    ollama pull llama3

Run it

    ollama run llama3

3. Backend Setup (Express)

    cd backend
    npm install
    cp .env.example .env  # Add your DB connection string
    npm start

4. Frontend Setup (Next.js)

    cd frontend
    npm install
    npm run dev

📦 API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| `/api/chat` | POST | Create a new chat and stream the response |
| `/api/chat/:chatId/messages` | POST | Send a message to an existing chat |
| `/api/chat/:chatId/stop` | POST | Stop response streaming |
| `/api/chats` | GET | List all chat sessions |
| `/api/chat/:chatId` | GET | Get the message history of a session |
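
A frontend can consume the streamed body of `POST /api/chat` with the Streams API. The endpoint path comes from the table above; the request body fields and the plain-text chunk framing are assumptions for illustration:

```typescript
const decoder = new TextDecoder();

// Drain a streamed response body into a single string, chunk by chunk.
// In a chat UI you would append each decoded chunk to the message as it arrives.
async function drainStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode(); // flush any buffered multi-byte sequence
}

// Illustrative usage (assumes the backend from this README is running):
// const res = await fetch("/api/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ message: "Hello" }),
// });
// const full = await drainStream(res.body!);
```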

💾 Database Schema

chats Table

| Field | Type |
| --- | --- |
| id | SERIAL |
| title | VARCHAR |
| created_at | TIMESTAMP |

messages Table

| Field | Type |
| --- | --- |
| id | SERIAL |
| chat_id | INTEGER |
| role | VARCHAR |
| content | TEXT |
| timestamp | TIMESTAMP |
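
Since the project tree includes `prisma/schema.prisma`, these tables could map to Prisma models roughly as follows. This is a hedged sketch: the model and field names are illustrative, and the actual schema in the repository may differ.

```prisma
model Chat {
  id        Int       @id @default(autoincrement())
  title     String
  createdAt DateTime  @default(now()) @map("created_at")
  messages  Message[]

  @@map("chats")
}

model Message {
  id        Int      @id @default(autoincrement())
  chatId    Int      @map("chat_id")
  role      String
  content   String
  timestamp DateTime @default(now())
  chat      Chat     @relation(fields: [chatId], references: [id])

  @@map("messages")
}
```

The `@map`/`@@map` attributes keep camelCase field names in code while matching the snake_case columns shown above.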

🧠 Available Models

You can use any model supported by Ollama:

    ollama pull llama3
    ollama pull gemma:1b

To switch models, update the payload or configuration:

    model: 'llama3' // or 'gemma:1b'

🌱 Environment Variables

Create a .env file:

    DATABASE_URL=postgresql://username:password@localhost:5432/ollama_chat
    OLLAMA_API=http://localhost:11434
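
In the backend, these variables are typically loaded from `.env` (e.g. via dotenv) and read from `process.env`. A minimal sketch, with fallback defaults mirroring the values above (the helper name is illustrative):

```typescript
// Read an environment variable, falling back to a default when unset.
// With dotenv, `dotenv.config()` would populate process.env from .env first.
function envOr(name: string, fallback: string): string {
  return process.env[name] ?? fallback;
}

const databaseUrl = envOr(
  "DATABASE_URL",
  "postgresql://username:password@localhost:5432/ollama_chat",
);
const ollamaApi = envOr("OLLAMA_API", "http://localhost:11434");

console.log(ollamaApi); // the configured Ollama endpoint
```

Keep real credentials out of version control; `.env` belongs in `.gitignore`.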

✨ Future Improvements

  • 🔐 Authentication with JWT
  • 📥 Export chats as PDF or TXT
  • 🌍 Multilingual LLM support
  • 🧪 Unit and integration tests

📝 License

This project is licensed under the MIT License.

👥 Author

Made with 💖 by Rishabh Dhawad
