---
title: Chatbot Final
emoji: 🦙
colorFrom: red
colorTo: yellow
sdk: docker
app_port: 8501
pinned: false
license: mit
---
# 🤖 Chatbot with Voice + Text | React + FastAPI + Ollama

This is a full-stack chatbot application supporting both **text** and **voice input**, powered by:

* **🧠 Ollama (LLM)** – for chatbot responses using `krishna_choudhary/tinyllama`
* **🗣️ Ollama (Whisper STT)** – for voice-to-text transcription using `anagram/whispertiny`
* **⚛️ React** – for a modern, responsive chat interface
* **⚡ FastAPI** – for backend API endpoints
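To make the wiring concrete, here is a minimal sketch of the FastAPI-to-Ollama hop. The `/api/chat` route name and request shape are illustrative assumptions, not the Space's actual code; only the Ollama endpoint and model name come from this README:

```python
# minimal sketch of a FastAPI endpoint forwarding a message to Ollama
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
OLLAMA_URL = "http://127.0.0.1:11434"          # Ollama's default address
MODEL_NAME = "krishna_choudhary/tinyllama"     # model from this README

class ChatRequest(BaseModel):
    message: str

@app.post("/api/chat")  # hypothetical route name
async def chat(req: ChatRequest):
    async with httpx.AsyncClient(timeout=60.0) as client:
        # Ollama's /api/chat endpoint; "stream": False returns one JSON object
        resp = await client.post(f"{OLLAMA_URL}/api/chat", json={
            "model": MODEL_NAME,
            "messages": [{"role": "user", "content": req.message}],
            "stream": False,
        })
        resp.raise_for_status()
        return {"reply": resp.json()["message"]["content"]}
```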
---

## 📚 Fine-Tuning Dataset & Training Code

📌 **Looking to fine-tune the chatbot yourself?**

Check out the open-source dataset and training scripts used to fine-tune `tinyllama`:

🔗 **GitHub Repository:** [Byte-Maste/krishna_Chatbot_Dataset](https://github.com/Byte-Maste/krishna_Chatbot_Dataset)

This repository includes:

* 🗂️ Custom chatbot dialogue dataset
* 🧪 Fine-tuning scripts and setup
* 📝 Prompt formatting for LLM compatibility

---
## 🚀 Features

* 💬 **Interactive Chat UI** – Clean and responsive frontend built in React
* 🎤 **Voice Input (STT)** – Record your voice and transcribe using Whisper
* 🧠 **Ollama LLM** – Generate AI responses locally using TinyLlama (or your preferred model)
* 🔄 **Streaming Responses** – Responses are streamed back for a natural chat flow (see the sketch after this list)
* 🐳 **Dockerized** – Easy deployment using Docker
* 📦 **Modular Architecture** – Clean separation of concerns (React frontend, FastAPI backend)
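A minimal sketch of how the streaming feature can be implemented: Ollama emits one JSON object per line when `"stream": true`, and FastAPI's `StreamingResponse` relays the text chunks to the browser. The route name and payload shape are assumptions; the NDJSON format is Ollama's standard streaming output:

```python
# relay Ollama's NDJSON stream to the client as plain-text chunks
import json
import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
OLLAMA_URL = "http://127.0.0.1:11434"
MODEL_NAME = "krishna_choudhary/tinyllama"

@app.post("/api/chat/stream")  # hypothetical route name
async def chat_stream(payload: dict):
    async def token_stream():
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", f"{OLLAMA_URL}/api/chat", json={
                "model": MODEL_NAME,
                "messages": [{"role": "user", "content": payload["message"]}],
                "stream": True,
            }) as resp:
                # each non-empty line is a JSON object with one content chunk
                async for line in resp.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)
                    if chunk.get("done"):
                        break
                    yield chunk["message"]["content"]
    return StreamingResponse(token_stream(), media_type="text/plain")
```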
---

## 🧩 Tech Stack

| Layer    | Tech                                   |
| -------- | -------------------------------------- |
| Frontend | React (Vite)                           |
| Backend  | FastAPI                                |
| LLM      | Ollama - `krishna_choudhary/tinyllama` |
| STT      | Ollama - `anagram/whispertiny`         |
| Audio    | Web Audio API                          |
| Infra    | Docker + Nginx (Production)            |
---

## 🏗️ Project Structure

```
.
├── Dockerfile          # Full app container (React + FastAPI + Ollama)
├── start.sh            # Start script: launches Ollama & FastAPI
├── main.py             # FastAPI server logic (LLM + STT endpoints)
├── requirements.txt    # Python dependencies
├── nginx.conf          # Nginx config for proxying & serving frontend
└── frontend/
    ├── src/App.jsx     # React component with chat logic
    ├── src/App.css     # App styles
    └── dist/           # Production build output (after npm run build)
```
---

## 🛠️ Local Development (No Docker)

### 1. Clone Repo

```bash
git clone <your-repo-url>
cd <your-repo-directory>
```

### 2. Backend Setup (FastAPI)

```bash
pip install -r requirements.txt
```

### 3. Frontend Setup (React)

```bash
cd frontend
npm install
npm run build
cd ..
```

### 4. Start Ollama Locally

`ollama pull` talks to the Ollama server, so the server must be running first:

```bash
ollama serve &   # skip if Ollama already runs as a background service
ollama pull krishna_choudhary/tinyllama
ollama pull anagram/whispertiny
```
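Before starting the backend, you can sanity-check that Ollama is reachable and the models were pulled. A small Python check against Ollama's real `/api/tags` endpoint (the script name and the `tinyllama` substring test are just conveniences):

```python
# check_ollama.py - verify the Ollama server is up and models are present
import requests

resp = requests.get("http://127.0.0.1:11434/api/tags", timeout=5)
resp.raise_for_status()
names = [m["name"] for m in resp.json()["models"]]
print("Pulled models:", names)
assert any("tinyllama" in n for n in names), "LLM model not pulled yet"
```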
### 5. Run FastAPI Server

```bash
uvicorn main:app --host 0.0.0.0 --port 7860
```

Then access the app at `http://localhost:7860`.

---
## 🐳 Docker Deployment (Recommended)

### 1. Build Docker Image

```bash
docker build -t chatbot-ollama-app .
```

### 2. Run Container

```bash
docker run -p 8501:8501 \
  -e OLLAMA_HOST="http://127.0.0.1:11434" \
  -e MODEL_NAME="krishna_choudhary/tinyllama" \
  -e WHISPER_MODEL_NAME="anagram/whispertiny" \
  chatbot-ollama-app
```
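The container reads its configuration from these environment variables. A minimal sketch of how `main.py` might pick them up; the variable names match the command above, while the fallback defaults are assumptions:

```python
# read runtime configuration from the container environment
import os

OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://127.0.0.1:11434")
MODEL_NAME = os.getenv("MODEL_NAME", "krishna_choudhary/tinyllama")
WHISPER_MODEL_NAME = os.getenv("WHISPER_MODEL_NAME", "anagram/whispertiny")
```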
### 3. Access App

Open your browser at:

👉 `http://localhost:8501`

---
## 🎙️ Using the Chatbot

* 🧑‍💻 **Text Input**: Type a question and hit **Send**.
* 🗣️ **Voice Input**: Click the mic icon, speak, then click again to transcribe and submit (see the backend sketch after this list).
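On the backend side, voice input boils down to an upload endpoint: the browser records audio via the Web Audio API and POSTs it to FastAPI, which hands it to the STT model. The route name and the `transcribe()` helper below are hypothetical placeholders; how the Space actually invokes `anagram/whispertiny` is defined in `main.py`:

```python
# sketch of a transcription endpoint; transcribe() is a hypothetical stub
from fastapi import FastAPI, UploadFile, File

app = FastAPI()

async def transcribe(audio_bytes: bytes) -> str:
    """Hypothetical helper that forwards audio to the Whisper STT model."""
    raise NotImplementedError  # see main.py for the real STT call

@app.post("/api/transcribe")  # hypothetical route name
async def transcribe_audio(file: UploadFile = File(...)):
    audio_bytes = await file.read()
    text = await transcribe(audio_bytes)
    return {"text": text}
```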
---
## 🔧 Customization Options

| What          | How to Customize                                         |
| ------------- | -------------------------------------------------------- |
| LLM Model     | Change `MODEL_NAME` in `main.py` or `start.sh`           |
| Whisper Model | Update `WHISPER_MODEL_NAME` in `main.py` or `start.sh`   |
| Ollama Host   | Modify `OLLAMA_HOST_URL` in `main.py`                    |
| Token Speed   | Adjust the `asyncio.sleep()` delay inside the streaming function (see below) |
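The last row refers to the artificial pause between streamed chunks: shrinking or removing the `asyncio.sleep()` call speeds up the typing effect. A minimal, runnable illustration; the generator name and the 0.02 s delay are placeholders, not the project's actual values:

```python
# pace streamed tokens with a fixed delay between them
import asyncio

async def paced(tokens, delay=0.02):
    """Yield tokens with a pause between them (typing effect)."""
    for token in tokens:
        yield token
        await asyncio.sleep(delay)  # lower = faster, 0 = no throttling

async def demo():
    async for t in paced(["Hel", "lo", "!"]):
        print(t, end="", flush=True)

asyncio.run(demo())
```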
---
## 🧪 Quick Test Without Nginx

In development, React can call the backend directly:

* Set the API base in the frontend to `http://localhost:7860/api/`
* Run React in dev mode:

```bash
cd frontend
npm run dev
```
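Note that in this setup the dev server and the backend run on different origins, so the browser will block the frontend's requests unless FastAPI allows them. A sketch using FastAPI's `CORSMiddleware`; port 5173 is Vite's default dev port and an assumption here:

```python
# allow the Vite dev server's origin to call the FastAPI backend
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],  # Vite dev server default
    allow_methods=["*"],
    allow_headers=["*"],
)
```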
---

## 🤝 License & Credits

* Built with ❤️ using [FastAPI](https://fastapi.tiangolo.com/), [React](https://reactjs.org/), and [Ollama](https://ollama.com/)
* Models:
  * `krishna_choudhary/tinyllama`
  * `anagram/whispertiny`
* Dataset & Fine-tuning:
  * 🔗 [Byte-Maste/krishna_Chatbot_Dataset](https://github.com/Byte-Maste/krishna_Chatbot_Dataset)

---