---
license: apache-2.0
datasets:
- arafatanam/Mental-Health-Counseling
- arafatanam/Student-Mental-Health-Counseling-10K
language:
- en
base_model:
- unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- mental-health
- student-focused
- chatbot
---
# LLaMA-3-8B-Instruct Fine-Tuned for Mental Health Counseling
## Model Overview
This is a fine-tuned version of [`unsloth/llama-3-8b-Instruct-bnb-4bit`](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit), adapted for mental health counseling applications. It is designed to provide thoughtful, relevant, and compassionate responses.
## Dataset
- **Amod/mental_health_counseling_conversations** (cleaned version: [`arafatanam/Mental-Health-Counseling`](https://huggingface.co/datasets/arafatanam/Mental-Health-Counseling)) - **2,752 rows**
- **chillies/student-mental-health-counseling-vn** (translated version: [`arafatanam/Student-Mental-Health-Counseling-10K`](https://huggingface.co/datasets/arafatanam/Student-Mental-Health-Counseling-10K)) - **7,500 rows**
- **Total dataset size**: 10,252 rows
## Training Details
- **Hardware**: Kaggle Notebooks (GPU T4 x2)
- **Fine-tuning framework**: `Unsloth` with `LoRA`
- **Training settings**:
- `max_seq_length = 512`
- `batch_size = 8`
- `gradient_accumulation_steps = 4`
- `num_train_epochs = 2`
- `learning_rate = 5e-5`
- `optimizer = adamw_8bit`
- `lr_scheduler = cosine`
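The hyperparameters above map onto a standard Unsloth + TRL setup roughly as in the sketch below. The exact training script was not published, so this is an illustration: the LoRA rank, alpha, target modules, and the `dataset_text_field` name are assumptions, while the batch size, accumulation steps, epochs, learning rate, optimizer, and scheduler follow the values listed above.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model named in this card
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha/target modules here are illustrative, not from the card
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# One of the two source datasets; in practice both are concatenated (10,252 rows total)
train_dataset = load_dataset("arafatanam/Mental-Health-Counseling", split="train")

args = TrainingArguments(
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=2,
    learning_rate=5e-5,
    optim="adamw_8bit",
    lr_scheduler_type="cosine",
    output_dir="outputs",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=512,
    args=args,
)
trainer.train()
```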
## Training Results
- **Final training loss**: `1.2433`
- **Total steps**: `640`
- **Trainable parameters**: `0.52%` of total model parameters
- **Validation loss**: `1.182`
- **Evaluation metric** (perplexity): `3.15`
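The reported step count is consistent with the batch settings above: 8 samples per device times 4 accumulation steps gives an effective batch of 32, and 10,252 rows over 2 epochs then yields 640 optimizer steps, assuming incomplete trailing batches are dropped each epoch:

```python
rows = 10_252
epochs = 2
per_device_batch_size = 8
gradient_accumulation_steps = 4

# One optimizer update accumulates gradients over batch_size * accumulation samples
effective_batch = per_device_batch_size * gradient_accumulation_steps  # 32

# Assuming the incomplete final batch of each epoch is dropped
steps_per_epoch = rows // effective_batch  # 320
total_steps = steps_per_epoch * epochs
print(total_steps)  # 640
```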
## Usage
This model can be applied to:
- AI-driven mental health chatbots
- Personalized therapy assistance
- Generating mental health support content
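When querying the model, inputs should follow the Llama-3 Instruct chat format, which `tokenizer.apply_chat_template(..., add_generation_prompt=True)` in `transformers` produces automatically. As a minimal sketch, assuming the standard Llama-3 special tokens, the template looks like this (the system and user messages below are only examples):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Build a Llama-3 Instruct prompt string by hand; normally you would
    call tokenizer.apply_chat_template(..., add_generation_prompt=True)."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a compassionate mental health counseling assistant.",
    "I've been feeling overwhelmed by my coursework lately.",
)
```

The formatted prompt is then tokenized and passed to the model's `generate` method; the trailing assistant header cues the model to produce the counseling response.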