# 🩺 MedicalChatBot

**MedicalChatBot** is a medical domain-focused chatbot fine-tuned using **LoRA (Low-Rank Adaptation)** on top of [`mistralai/Mistral-7B-Instruct`](https://huggingface.co/mistralai/Mistral-7B-Instruct). It is designed for health education, medical Q&A, and research use only.

---

## 📌 Overview

- 🧠 Based on Mistral-7B-Instruct, a powerful instruction-following LLM
- 🔧 Fine-tuned using [PEFT](https://github.com/huggingface/peft) + LoRA on a medical dataset
- 📚 Trained on: [`kberta2014/medical-chat-dataset`](https://huggingface.co/datasets/kberta2014/medical-chat-dataset)
- ⚡ Efficient: only the LoRA adapter layers are trained, not the full model
- 📦 Deployment-ready: compatible with Hugging Face `transformers`, `Gradio`, and Spaces

---

## 🧠 Prompt Format

Prompt the model in the following format:

```
### Instruction:

### Input:

### Response:
```

Example:

```
### Instruction:
What are the symptoms of high blood pressure?

### Input:

### Response:
```

---

## 💬 Example Usage

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="kberta2014/MedicalChatBot",
    tokenizer="kberta2014/MedicalChatBot",
)

prompt = """### Instruction:
What are common symptoms of diabetes?

### Input:

### Response:
"""

# do_sample=True is required for temperature to take effect;
# greedy decoding ignores it.
output = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```
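The `pipeline` call above assumes the repository resolves as a standalone model. Since this model is a LoRA adapter, you can also attach the adapter to the base model explicitly with `peft`. This is a minimal sketch, assuming the repository hosts the adapter weights and using the base-model ID named on this card; everything else is standard `transformers`/`peft` usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct"  # base model named on this card
adapter_id = "kberta2014/MedicalChatBot"   # this repository (LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the frozen base model (device_map="auto" requires `accelerate`),
# then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "### Instruction:\nWhat are common symptoms of diabetes?\n\n### Input:\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```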
---

## 🤖 Gradio Chatbot Interface

```python
import gradio as gr
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="kberta2014/MedicalChatBot",
    tokenizer="kberta2014/MedicalChatBot",
)

def chat(instruction, input_text=""):
    prompt = f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
    # return_full_text=False strips the prompt so only the answer is shown;
    # do_sample=True is required for temperature to take effect.
    result = pipe(
        prompt,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,
        return_full_text=False,
    )
    return result[0]["generated_text"]

gr.Interface(
    fn=chat,
    inputs=["text", "text"],
    outputs="text",
    title="🩺 MedicalChatBot",
    description="Ask medical questions and get responses from a fine-tuned LLM",
).launch()
```

---

## 🏋️ Training Configuration

- **Base model**: `mistralai/Mistral-7B-Instruct`
- **Dataset**: [`kberta2014/medical-chat-dataset`](https://huggingface.co/datasets/kberta2014/medical-chat-dataset)
- **Framework**: Hugging Face `transformers`, `peft`, `datasets`
- **PEFT config** (see the sketch below):
  - `r=8`, `lora_alpha=16`, `target_modules=["q_proj", "v_proj"]`
  - `lora_dropout=0.05`, `bias="none"`, `task_type="CAUSAL_LM"`
- **Training run**: ~3 epochs on a Colab T4 GPU
- **Batch size**: 2
- **Learning rate**: 2e-4
- **Precision**: bf16 / float16
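For reference, the listed PEFT configuration maps directly onto `peft`'s `LoraConfig`. The following is a minimal sketch, not the exact training script: the LoRA hyperparameters and training bullets above are reproduced as stated, while `output_dir` and the rest of the `TrainingArguments` wiring are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA configuration as listed above: only the q_proj/v_proj attention
# projections receive low-rank adapters; the 7B base stays frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report roughly ~7M trainable params

# Illustrative training arguments matching the bullets above
# (batch size 2, lr 2e-4, 3 epochs, half precision).
args = TrainingArguments(
    output_dir="medical-chatbot-lora",  # hypothetical output path
    per_device_train_batch_size=2,
    learning_rate=2e-4,
    num_train_epochs=3,
    fp16=True,
)
```

In the actual run, `args`, the tokenized dataset, and a data collator would then be passed to a `Trainer` (or similar training loop).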
---

## 📊 Training Metrics (Sample)

| Metric           | Value      |
|------------------|------------|
| Training loss    | ~1.02      |
| Eval loss        | ~0.94      |
| Perplexity       | ~2.6       |
| Epochs           | 3          |
| Trainable params | ~7M (LoRA) |
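As a consistency check, perplexity here is `exp(eval loss)`: exp(0.94) ≈ 2.56, which matches the ~2.6 reported above.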
---

## 🧾 Citation

If you use this model in your research or application, please cite:

```bibtex
@misc{medicalchatbot2025,
  title={MedicalChatBot: A LoRA Fine-Tuned Mistral-7B Model for Medical QA},
  author={kberta2014},
  year={2025},
  url={https://huggingface.co/kberta2014/MedicalChatBot},
  note={Hugging Face model repository}
}
```

---

## ⚠️ Disclaimer

This model is intended for **research and educational purposes only**. It is **not a replacement for professional medical advice or diagnosis**. Always consult a licensed healthcare provider for real medical concerns.

---

## 📄 License

Apache 2.0, the same license as the base model `mistralai/Mistral-7B-Instruct`.