# 🤖 QA ChatBot – TinyLlama Fine-tuned Chat Assistant

This project demonstrates how to fine-tune a language model (TinyLlama-1.1B-Chat) on a question-answer dataset and serve it as a chatbot through FastAPI or an interactive Python console. It covers training with QLoRA (Quantized Low-Rank Adaptation), LoRA, and IA³ adapters, plus inference for personal Q&A with the fine-tuned model.
## 📊 Dataset

The dataset (`QA.json`) contains personal questions and answers in JSON format:
```json
[
  {
    "question": "What is your full name?",
    "answer": "My full name is Hardik Dhamel."
  },
  ...
]
```
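Loading the file and checking a user question against the predefined pairs can be sketched as follows. This is a minimal illustration, not the project's actual code; the function names (`load_qa_pairs`, `exact_match_answer`) and the normalization rule are assumptions.

```python
import json

def load_qa_pairs(path="QA.json"):
    """Load the question-answer pairs from the dataset file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def _normalize(text):
    """Lowercase, trim whitespace, and drop a trailing question mark."""
    return text.strip().lower().rstrip("?")

def exact_match_answer(question, qa_pairs):
    """Return the stored answer if the input matches a predefined question."""
    wanted = _normalize(question)
    for pair in qa_pairs:
        if _normalize(pair["question"]) == wanted:
            return pair["answer"]
    return None  # no predefined match; caller can fall back to the model
```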
## ✅ Features
- 🔍 Q&A Matching: First checks if input matches predefined QAs
- 🧠 Fine-tuning: Uses QLoRA to fine-tune on limited hardware
- 💬 Chatbot Inference: Answers based on preset, similar, or generative logic
- 🚀 Model Deployment Ready: Compatible with FastAPI / Docker deployment
- 🛠️ Multi-level Inference: Static, factual, and creative response modes
- 🪶 Quantized Model: 4-bit memory-efficient fine-tuning
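The multi-level inference described above (static match → similar-question match → generative fallback) can be sketched with a simple router. This is a hedged illustration using stdlib fuzzy matching; the function name `route_query`, the `generate_fn` callback, and the `cutoff` value are assumptions, not the project's actual implementation.

```python
from difflib import get_close_matches

def route_query(question, qa_pairs, generate_fn, cutoff=0.8):
    """Three-level routing: exact match, then fuzzy match, then the model."""
    questions = [p["question"] for p in qa_pairs]
    # Level 1: static — the input exactly matches a predefined question
    if question in questions:
        return qa_pairs[questions.index(question)]["answer"]
    # Level 2: factual — the input is close to a predefined question
    close = get_close_matches(question, questions, n=1, cutoff=cutoff)
    if close:
        return qa_pairs[questions.index(close[0])]["answer"]
    # Level 3: creative — fall back to the fine-tuned generative model
    return generate_fn(question)
```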
## 🚀 Model
- Base Model: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- Adapter: QLoRA, LoRA, IA³
- Format: Causal LM (Chat format or [INST] prompt)
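The two prompt formats mentioned above can be sketched as plain string builders. TinyLlama-1.1B-Chat-v1.0 documents a Zephyr-style chat template (`<|system|>` / `<|user|>` / `<|assistant|>`); the helper names and the default system message here are assumptions for illustration.

```python
def build_chat_prompt(question, system="You are a helpful personal assistant."):
    """Format a question in TinyLlama-Chat's Zephyr-style chat template."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{question}</s>\n"
        f"<|assistant|>\n"  # the model continues from here
    )

def build_inst_prompt(question):
    """Alternative Llama-style [INST] prompt format."""
    return f"[INST] {question} [/INST]"
```

In practice, `tokenizer.apply_chat_template` from the `transformers` library builds the chat-format prompt from a list of role/content messages, so the template never has to be hand-written.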