---
title: Chatbot Final
emoji: πŸ¦€
colorFrom: red
colorTo: yellow
sdk: docker
app_port: 8501
pinned: false
license: mit
---

# 🤖 Chatbot with Voice + Text | React + FastAPI + Ollama

This is a full-stack chatbot application supporting both text and voice input, powered by:

- 🧠 **Ollama (LLM)** – chatbot responses using `krishna_choudhary/tinyllama`
- 🗣️ **Ollama (Whisper STT)** – voice-to-text transcription using `anagram/whispertiny`
- ⚛️ **React** – a modern, responsive chat interface
- ⚡ **FastAPI** – backend API endpoints

## 📂 Fine-Tuning Dataset & Training Code

👉 Want to fine-tune the chatbot yourself? Check out the open-source dataset and training scripts used to fine-tune `tinyllama`:

🔗 GitHub repository: Byte-Maste/krishna_Chatbot_Dataset

This repository includes:

- 🗃️ Custom chatbot dialogue dataset
- 🧪 Fine-tuning scripts and setup
- 📊 Prompt formatting for LLM compatibility

## 🚀 Features

  • πŸ’¬ Interactive Chat UI – Clean and responsive frontend built in React
  • 🎀 Voice Input (STT) – Record your voice and transcribe using Whisper
  • 🧠 Ollama LLM – Generate AI responses locally using TinyLlama (or your preferred model)
  • πŸ” Streaming Responses – Responses are streamed back for a natural chat flow
  • 🐳 Dockerized – Easy deployment using Docker
  • πŸ›  Modular Architecture – Clean separation of concerns (React frontend, FastAPI backend)

## 🧩 Tech Stack

| Layer    | Tech                                   |
|----------|----------------------------------------|
| Frontend | React (Vite)                           |
| Backend  | FastAPI                                |
| LLM      | Ollama – `krishna_choudhary/tinyllama` |
| STT      | Ollama – `anagram/whispertiny`         |
| Audio    | Web Audio API                          |
| Infra    | Docker + Nginx (production)            |

πŸ—‚οΈ Project Structure

```text
.
├── Dockerfile              # Full app container (React + FastAPI + Ollama)
├── start.sh                # Start script: launches Ollama & FastAPI
├── main.py                 # FastAPI server logic (LLM + STT endpoints)
├── requirements.txt        # Python dependencies
├── nginx.conf              # Nginx config for proxying & serving the frontend
└── frontend/
    ├── src/App.jsx         # React component with chat logic
    ├── src/App.css         # App styles
    └── dist/               # Production build output (after npm run build)
```

πŸ› οΈ Local Development (No Docker)

### 1. Clone the Repo

```bash
git clone <your-repo-url>
cd <your-repo-directory>
```

### 2. Backend Setup (FastAPI)

```bash
pip install -r requirements.txt
```

### 3. Frontend Setup (React)

```bash
cd frontend
npm install
npm run build
cd ..
```

### 4. Start Ollama Locally

```bash
ollama pull krishna_choudhary/tinyllama
ollama pull anagram/whispertiny
ollama serve
```

### 5. Run the FastAPI Server

```bash
uvicorn main:app --host 0.0.0.0 --port 7860
```

Then open http://localhost:7860.
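Under the hood, the FastAPI backend talks to the Ollama server started above over HTTP. The sketch below shows how such a request could be built; the endpoint and payload shape follow Ollama's standard `/api/generate` API, but the helper name and constants are illustrative, not taken from `main.py`:

```python
import json

OLLAMA_HOST = "http://127.0.0.1:11434"          # default ollama serve address
MODEL_NAME = "krishna_choudhary/tinyllama"      # model pulled in step 4

def build_generate_request(prompt: str, stream: bool = True):
    """Return the URL and JSON body for Ollama's /api/generate endpoint."""
    url = f"{OLLAMA_HOST}/api/generate"
    body = json.dumps({"model": MODEL_NAME, "prompt": prompt, "stream": stream})
    return url, body.encode("utf-8")

url, body = build_generate_request("Hello!")
# POST `body` to `url` with any HTTP client (httpx, requests, ...)
```

With `stream: true`, Ollama returns one JSON object per generated chunk, which the backend can forward to the browser as it arrives.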


## 🐳 Docker Deployment (Recommended)

### 1. Build the Docker Image

```bash
docker build -t chatbot-ollama-app .
```

### 2. Run the Container

```bash
docker run -p 8501:8501 \
  -e OLLAMA_HOST="http://127.0.0.1:11434" \
  -e MODEL_NAME="krishna_choudhary/tinyllama" \
  -e WHISPER_MODEL_NAME="anagram/whispertiny" \
  chatbot-ollama-app
```
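Inside the container, the backend can pick these variables up with sensible fallbacks. A minimal sketch of that pattern (the variable names match the `docker run` flags above; the defaults shown are assumptions, not necessarily what `main.py` uses):

```python
import os

# Fall back to the same values the container is started with above,
# so the app also runs when the -e flags are omitted.
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")
MODEL_NAME = os.environ.get("MODEL_NAME", "krishna_choudhary/tinyllama")
WHISPER_MODEL_NAME = os.environ.get("WHISPER_MODEL_NAME", "anagram/whispertiny")

print(OLLAMA_HOST, MODEL_NAME, WHISPER_MODEL_NAME)
```

Reading configuration from the environment keeps the image generic: swapping models is a `docker run` flag, not a rebuild.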

### 3. Access the App

Open your browser at 👉 http://localhost:8501


πŸŽ™οΈ Using the Chatbot

  • πŸ§‘β€πŸ’» Text Input: Type a question and hit Send.
  • πŸ—£οΈ Voice Input: Click the mic icon, speak, then click again to transcribe and submit.

## 🔧 Customization Options

| What          | How to Customize                                             |
|---------------|--------------------------------------------------------------|
| LLM model     | Change `MODEL_NAME` in `main.py` or `start.sh`               |
| Whisper model | Update `WHISPER_MODEL_NAME` in `main.py` or `start.sh`       |
| Ollama host   | Modify `OLLAMA_HOST_URL` in `main.py`                        |
| Token speed   | Adjust the `asyncio.sleep()` delay in the streaming function |
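For the token-speed tweak, the streaming function presumably yields chunks with a short `asyncio.sleep()` pause between them. A minimal sketch of that pattern (function and variable names are illustrative, not taken from `main.py`):

```python
import asyncio

TOKEN_DELAY = 0.02  # seconds between chunks; raise this to slow the stream down

async def stream_tokens(text: str):
    """Yield a response word by word, pausing TOKEN_DELAY between chunks."""
    for word in text.split():
        yield word + " "
        await asyncio.sleep(TOKEN_DELAY)

async def collect(text: str) -> str:
    """Drain the stream into one string (handy for testing)."""
    return "".join([chunk async for chunk in stream_tokens(text)]).strip()

print(asyncio.run(collect("Hello from the chatbot")))  # -> Hello from the chatbot
```

In FastAPI, a generator like `stream_tokens` can be handed to `StreamingResponse` so the browser receives tokens as they are produced.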

## 🧪 Quick Test Without Nginx

In development, React can call the backend directly:

- Set the API base in the frontend to `http://localhost:7860/api/`
- Run React in dev mode:

```bash
cd frontend
npm run dev
```
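One way to set the API base without editing source is a Vite environment file. This assumes the frontend reads the value via `import.meta.env`; the variable name below is an assumption, not taken from `App.jsx`:

```text
# frontend/.env.development (hypothetical variable name)
VITE_API_BASE=http://localhost:7860/api/
```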

## 🤝 License & Credits