pipeline_tag: text-generation
tags:
- medical
---

# Fine-Tuned LLaMA-3 8B Mental Health Conversational Model

## Model Overview

This is a **fine-tuned version of LLaMA-3 8B Instruct**, adapted for **conversational mental health support**. The model was fine-tuned with **LoRA / QLoRA** and quantized to **4-bit** for efficient inference, making it suitable for lightweight deployment without a significant loss in response quality.

- **Base Model:** LLaMA-3 8B Instruct
- **Fine-Tuning Data:** Mental health conversational dataset
- **Technique:** LoRA / QLoRA
- **Quantization:** 4-bit (GGUF)
- **File:** `model.Q4_K_M.gguf`

The model is tuned to generate empathetic, safe, and context-aware responses in mental health conversations. It is intended for research, personal, or educational use.

---
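Because the base model is LLaMA-3 8B Instruct, the model expects Llama-3's chat template. Chat UIs and most GGUF runtimes apply it automatically, but a sketch of the format is useful if you drive a raw completion API yourself (the system/user strings below are illustrative):

```python
# Sketch of the Llama-3 Instruct chat template; shown for clarity only,
# since chat-aware runtimes build this prompt for you.
def llama3_prompt(system: str, user: str) -> str:
    """Format one system + user turn in the Llama-3 Instruct template."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a supportive listener.", "I feel stressed."))
```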

## How to Download

Download the model file directly from the Hugging Face repository: [model.Q4_K_M.gguf](https://huggingface.co/Kush26/Mental_Health_ChatBot/blob/main/model.Q4_K_M.gguf).

---
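For scripted downloads, Hugging Face also serves raw files at `.../resolve/<revision>/<filename>` URLs. A minimal sketch building that URL (the repo id comes from the link above):

```python
# Build the direct-download URL for a file hosted in a Hugging Face repo.
# Hugging Face serves raw file bytes under the "resolve" path.
def gguf_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for one file in a HF repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = gguf_url("Kush26/Mental_Health_ChatBot", "model.Q4_K_M.gguf")
print(url)
# Then fetch it with e.g.:  curl -L -o model.Q4_K_M.gguf "<printed url>"
```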

## Using in LM Studio

Follow these steps to run the model in **LM Studio**:

1. **Install LM Studio**
   Download and install LM Studio from [https://lmstudio.ai](https://lmstudio.ai).

2. **Add the Model**
   - Open LM Studio.
   - Click **"Add Model"** or **"Load Local Model"**.
   - Select the downloaded `model.Q4_K_M.gguf` file.

3. **Configure Model Settings**
   - Choose an appropriate **context length** (e.g., 2048 tokens).
   - Enable **GPU acceleration**, if available, for faster inference.
   - Adjust the **sampling parameters** (temperature, top-p) as needed.

4. **Start Chatting**
   - Open a new chat session.
   - Interact with the model for mental health conversations or research.

---
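Beyond the chat window, LM Studio can also serve a loaded model over an OpenAI-compatible local HTTP API (by default at `http://localhost:1234/v1`). A minimal sketch of such a request; the endpoint, system prompt, and sampling values are assumptions mirroring step 3, not part of this card:

```python
import json

# Build an OpenAI-style chat-completions payload for LM Studio's local server.
def chat_payload(user_text: str, temperature: float = 0.7, top_p: float = 0.9) -> dict:
    """Assemble a chat request with a supportive system prompt (illustrative)."""
    return {
        "messages": [
            {"role": "system", "content": "You are a supportive, empathetic listener."},
            {"role": "user", "content": user_text},
        ],
        "temperature": temperature,
        "top_p": top_p,
    }

payload = chat_payload("I've been feeling overwhelmed lately.")
print(json.dumps(payload, indent=2))

# With LM Studio's local server running, send it with the stdlib:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:1234/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.loads(urllib.request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```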

## Notes

- This model is **not a substitute for professional mental health care**.
- Use responsibly and ensure privacy when handling sensitive conversations.
- Compatible with LM Studio version 1.9 and above.