# MedicalChatBot

**MedicalChatBot** is a medical domain-focused chatbot fine-tuned using **LoRA (Low-Rank Adaptation)** on top of [`mistralai/Mistral-7B-Instruct`](https://huggingface.co/mistralai/Mistral-7B-Instruct).
It is designed for health education, medical Q&A, and research use only.

---

## Overview

- Based on Mistral-7B-Instruct, a powerful instruction-following LLM
- Fine-tuned using [PEFT](https://github.com/huggingface/peft) + LoRA on a medical dataset
- Trained on: [`kberta2014/medical-chat-dataset`](https://huggingface.co/datasets/kberta2014/medical-chat-dataset)
- Efficient: only the LoRA adapter layers are trained, not the full model
- Deployment-ready: compatible with Hugging Face `transformers`, `Gradio`, and Spaces

---
## Prompt Format

Use the following prompt format:

```
### Instruction:
<Your question>

### Input:
<Optional additional context>

### Response:
```

Example:

```
### Instruction:
What are the symptoms of high blood pressure?

### Input:

### Response:
```
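A small helper (illustrative only, not part of this repository) can assemble the template above so prompts stay consistent across calls:

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble the Instruction/Input/Response template used by this model."""
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{input_text}\n\n"
        f"### Response:\n"
    )

print(build_prompt("What are the symptoms of high blood pressure?"))
```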
---
## Example Usage

```python
from transformers import pipeline

# Load the model and tokenizer from the Hugging Face Hub
pipe = pipeline(
    "text-generation",
    model="kberta2014/MedicalChatBot",
    tokenizer="kberta2014/MedicalChatBot",
)

prompt = """### Instruction:
What are common symptoms of diabetes?

### Input:

### Response:
"""

# Sample up to 200 new tokens with moderate randomness
output = pipe(prompt, max_new_tokens=200, temperature=0.7)
print(output[0]["generated_text"])
```
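Note that `generated_text` from a `text-generation` pipeline includes the prompt itself. A small helper (hypothetical, not part of this repository) can strip everything up to the final `### Response:` marker so only the model's answer remains:

```python
def extract_response(generated_text: str) -> str:
    """Return only the text after the final '### Response:' marker."""
    marker = "### Response:"
    _, _, answer = generated_text.rpartition(marker)
    return answer.strip()

full = "### Instruction:\nWhat is BMI?\n\n### Input:\n\n### Response:\nBody Mass Index."
print(extract_response(full))  # → Body Mass Index.
```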
---
## Gradio Chatbot Interface

```python
import gradio as gr
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="kberta2014/MedicalChatBot",
    tokenizer="kberta2014/MedicalChatBot",
)

def chat(instruction, input_text=""):
    # Assemble the Instruction/Input/Response prompt and generate a reply
    prompt = f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
    return pipe(prompt, max_new_tokens=200, temperature=0.7)[0]["generated_text"]

gr.Interface(
    fn=chat,
    inputs=["text", "text"],
    outputs="text",
    title="MedicalChatBot",
    description="Ask medical questions and get responses from a fine-tuned LLM",
).launch()
```
---
## Training Configuration

- **Base model**: `mistralai/Mistral-7B-Instruct`
- **Dataset**: [`kberta2014/medical-chat-dataset`](https://huggingface.co/datasets/kberta2014/medical-chat-dataset)
- **Framework**: Hugging Face `transformers`, `peft`, `datasets`
- **PEFT config**:
  - `r=8`, `lora_alpha=16`, `target_modules=["q_proj", "v_proj"]`
  - `lora_dropout=0.05`, `bias="none"`, `task_type="CAUSAL_LM"`
- **Training run**: ~3 epochs on a Colab T4 GPU
- **Batch size**: 2
- **Learning rate**: 2e-4
- **Precision**: bf16 / float16
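The PEFT settings above correspond to a `LoraConfig` along these lines (a sketch of the setup, not the exact training script; variable names are illustrative):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied as alpha / r
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```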
---
## Training Metrics (Sample)

| Metric           | Value      |
|------------------|------------|
| Training loss    | ~1.02      |
| Eval loss        | ~0.94      |
| Perplexity       | ~2.6       |
| Epochs           | 3          |
| Trainable params | ~7M (LoRA) |
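The perplexity figure is consistent with the eval loss, since perplexity for a causal language model is simply the exponential of the mean cross-entropy loss:

```python
import math

eval_loss = 0.94
perplexity = math.exp(eval_loss)  # perplexity = exp(loss)
print(round(perplexity, 2))  # → 2.56, i.e. ~2.6 as reported above
```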
---
## Citation

If you use this model in your research or application, please cite:

```bibtex
@misc{medicalchatbot2025,
  title={MedicalChatBot: A LoRA Fine-Tuned Mistral-7B Model for Medical QA},
  author={kberta2014},
  year={2025},
  url={https://huggingface.co/kberta2014/MedicalChatBot},
  note={Hugging Face model repository}
}
```
---
## Disclaimer

This model is intended for **research and educational purposes only**.
It is **not a replacement for professional medical advice or diagnosis**.
Always consult a licensed healthcare provider for real medical concerns.

---

## License

Apache 2.0, the same license as the base model `mistralai/Mistral-7B-Instruct`.