---
title: Chatbot Final
emoji: 🤖
colorFrom: red
colorTo: yellow
sdk: docker
app_port: 8501
pinned: false
license: mit
---
# 🤖 Chatbot with Voice + Text | React + FastAPI + Ollama
This is a full-stack chatbot application supporting both **text** and **voice input**, powered by:
* **Ollama (LLM)** – chatbot responses using `krishna_choudhary/tinyllama`
* **Ollama (Whisper STT)** – voice-to-text transcription using `anagram/whispertiny`
* **React** – a modern, responsive chat interface
* **FastAPI** – backend API endpoints
---
## Fine-Tuning Dataset & Training Code
**Looking to fine-tune the chatbot yourself?**
Check out the open-source dataset and training scripts used to fine-tune `tinyllama`:
**GitHub Repository:** [Byte-Maste/krishna\_Chatbot\_Dataset](https://github.com/Byte-Maste/krishna_Chatbot_Dataset)
This repository includes:
* Custom chatbot dialogue dataset
* Fine-tuning scripts and setup
* Prompt formatting for LLM compatibility
---
## Features
* **Interactive Chat UI** – clean, responsive frontend built in React
* **Voice Input (STT)** – record your voice and have it transcribed by Whisper
* **Ollama LLM** – generate AI responses locally using TinyLlama (or your preferred model)
* **Streaming Responses** – tokens are streamed back for a natural chat flow
* **Dockerized** – easy deployment with Docker
* **Modular Architecture** – clean separation of concerns (React frontend, FastAPI backend)
---
## Tech Stack
| Layer | Tech |
| -------- | -------------------------------------- |
| Frontend | React (Vite) |
| Backend | FastAPI |
| LLM | Ollama - `krishna_choudhary/tinyllama` |
| STT | Ollama - `anagram/whispertiny` |
| Audio | Web Audio API |
| Infra | Docker + Nginx (Production) |
---
## Project Structure
```
.
├── Dockerfile           # Full app container (React + FastAPI + Ollama)
├── start.sh             # Start script: launches Ollama & FastAPI
├── main.py              # FastAPI server logic (LLM + STT endpoints)
├── requirements.txt     # Python dependencies
├── nginx.conf           # Nginx config for proxying & serving frontend
└── frontend/
    ├── src/App.jsx      # React component with chat logic
    ├── src/App.css      # App styles
    └── dist/            # Production build output (after npm run build)
```
---
## Local Development (No Docker)
### 1. Clone Repo
```bash
git clone <your-repo-url>
cd <your-repo-directory>
```
### 2. Backend Setup (FastAPI)
```bash
pip install -r requirements.txt
```
### 3. Frontend Setup (React)
```bash
cd frontend
npm install
npm run build
cd ..
```
### 4. Start Ollama Locally
```bash
# the Ollama server must be running before models can be pulled
ollama serve &
ollama pull krishna_choudhary/tinyllama
ollama pull anagram/whispertiny
```
### 5. Run FastAPI Server
```bash
uvicorn main:app --host 0.0.0.0 --port 7860
```
Then open `http://localhost:7860` in your browser.
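Under the hood, the FastAPI server forwards prompts to Ollama's local HTTP API and streams tokens back. The sketch below shows what that call can look like, assuming Ollama's standard NDJSON streaming `/api/generate` endpoint; the function names are illustrative, not the actual `main.py` code:

```python
# Hypothetical sketch of the backend's call to Ollama (not the real main.py).
# Assumes Ollama's standard /api/generate endpoint, which streams one JSON
# object per line when "stream" is true.
import json
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"
MODEL_NAME = "krishna_choudhary/tinyllama"

def extract_token(ndjson_line: str) -> str:
    """Pull the generated text out of one streamed NDJSON line."""
    chunk = json.loads(ndjson_line)
    return chunk.get("response", "")

def generate(prompt: str):
    """Yield response tokens from Ollama as they arrive."""
    body = json.dumps(
        {"model": MODEL_NAME, "prompt": prompt, "stream": True}
    ).encode()
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.decode().strip()
            if line:
                yield extract_token(line)
```

Each streamed line is a standalone JSON object, so the backend can relay tokens to the frontend as soon as they arrive.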
---
## Docker Deployment (Recommended)
### 1. Build Docker Image
```bash
docker build -t chatbot-ollama-app .
```
### 2. Run Container
```bash
docker run -p 8501:8501 \
-e OLLAMA_HOST="http://127.0.0.1:11434" \
-e MODEL_NAME="krishna_choudhary/tinyllama" \
-e WHISPER_MODEL_NAME="anagram/whispertiny" \
chatbot-ollama-app
```
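Inside the container, the backend can pick these values up from the environment. A minimal sketch, using the variable names from the `docker run` command above (the exact lookup in `main.py` may differ):

```python
# Illustrative config lookup; variable names match the docker run flags above,
# but the defaults and exact code in main.py are assumptions.
import os

OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://127.0.0.1:11434")
MODEL_NAME = os.getenv("MODEL_NAME", "krishna_choudhary/tinyllama")
WHISPER_MODEL_NAME = os.getenv("WHISPER_MODEL_NAME", "anagram/whispertiny")
```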
### 3. Access App
Open your browser at `http://localhost:8501`.
---
## Using the Chatbot
* **Text Input**: Type a question and hit **Send**.
* **Voice Input**: Click the mic icon, speak, then click again to transcribe and submit.
---
## Customization Options
| What | How to Customize |
| ------------- | -------------------------------------------------------- |
| LLM Model | Change `MODEL_NAME` in `main.py` or `start.sh` |
| Whisper Model | Update `WHISPER_MODEL_NAME` in `main.py` or `start.sh` |
| Ollama Host | Modify `OLLAMA_HOST_URL` in `main.py` |
| Token Speed | Adjust `asyncio.sleep()` delay inside streaming function |
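The token-speed knob in the last row can be pictured with a small async generator; the function names and default delay below are illustrative, not the app's actual code:

```python
# Sketch of pacing a token stream with asyncio.sleep(); increasing the delay
# slows the perceived "typing" speed in the chat UI.
import asyncio

async def paced_stream(tokens, delay: float = 0.05):
    """Yield tokens with a fixed pause between them."""
    for tok in tokens:
        yield tok
        await asyncio.sleep(delay)  # lower delay = faster streaming

async def collect(tokens, delay: float = 0.0):
    """Drain the paced stream into a list (handy for testing)."""
    return [t async for t in paced_stream(tokens, delay)]
```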
---
## Quick Test Without Nginx
In development, the React dev server can call the backend directly:
* Set API base in frontend to `http://localhost:7860/api/`
* Run React in dev mode:
```bash
cd frontend
npm run dev
```
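With both servers up, you can also smoke-test the API from Python. The `/api/` prefix matches the base URL above, but the route name and request shape are assumptions, so adjust them to the actual backend:

```python
# Hypothetical smoke-test client for the dev setup; the "chat" route and
# {"prompt": ...} payload are assumptions, not documented API.
import json
import urllib.request

API_BASE = "http://localhost:7860/api/"

def chat_once(prompt: str, path: str = "chat") -> str:
    """POST a prompt to the backend and return the raw response text."""
    req = urllib.request.Request(
        API_BASE + path,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```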
---
## License & Credits
* Built with ❤️ using [FastAPI](https://fastapi.tiangolo.com/), [React](https://reactjs.org/), and [Ollama](https://ollama.com/)
* Models:
  * `krishna_choudhary/tinyllama`
  * `anagram/whispertiny`
* Dataset & fine-tuning:
  * [Byte-Maste/krishna\_Chatbot\_Dataset](https://github.com/Byte-Maste/krishna_Chatbot_Dataset)
---