---
language:
- en
license: apache-2.0
tags:
- emotion-classification
- mental-health
- llama-3.1
- unsloth
- lora
- peft
- text-generation
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
datasets:
- google-research-datasets/go_emotions
- emotion
- cardiffnlp/tweet_eval
library_name: transformers
pipeline_tag: text-generation
---

# Fine-Tuned Emotion Classification Model

## Model Information

- **Base Model**: unsloth/Meta-Llama-3.1-8B-Instruct
- **Training Method**: LoRA (Low-Rank Adaptation)
- **LoRA Rank**: 32
- **Training Samples**: 56,400
- **Datasets Used**: GoEmotions, Emotion, TweetEval
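
A LoRA rank of 32 means each adapted weight matrix `W` (shape `d_out × d_in`) gets a low-rank update `ΔW = B·A`, adding `r · (d_in + d_out)` trainable parameters per matrix. A rough sketch of that bookkeeping (the layer shapes and target modules below are illustrative assumptions, not values read from this adapter's config):

```python
# Rough estimate of trainable LoRA parameters at rank r = 32.
# Shapes below are hypothetical (Llama-3.1-8B-style attention projections);
# the actual adapted modules are listed in adapter_config.json.

def lora_param_count(shapes, r=32):
    """Each adapted matrix of shape (d_out, d_in) adds r * (d_in + d_out) params."""
    return sum(r * (d_in + d_out) for d_out, d_in in shapes)

# Hypothetical example: q/k/v/o projections of one layer, hidden size 4096,
# with k/v projected to 1024 (grouped-query attention).
one_layer = [(4096, 4096), (1024, 4096), (1024, 4096), (4096, 4096)]
per_layer = lora_param_count(one_layer, r=32)
print(per_layer)       # parameters added per layer
print(per_layer * 32)  # across 32 transformer layers
```

Only these low-rank factors are trained and shipped in `adapter_model.safetensors`, which is why the adapter is a small fraction of the 8B base model's size.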

## How to Load This Model

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="emotion_model_finetuned",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# Enable inference mode
FastLanguageModel.for_inference(model)

# Use the model
prompt = """<|im_start|>system
You are a compassionate mental health support assistant.<|im_end|>
<|im_start|>user
I'm feeling anxious about tomorrow.<|im_end|>
<|im_start|>assistant
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
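
The prompt assembly and reply extraction above can be wrapped in small string helpers. This is a sketch under the assumption that the ChatML-style tags shown in the example are the template the model was fine-tuned with (and that they survive decoding; if they are registered as special tokens, `skip_special_tokens=True` will already have removed them):

```python
# String helpers for the ChatML-style template shown above.
# Tag names are assumptions taken from the example prompt,
# not read from the tokenizer config.

SYSTEM = "You are a compassionate mental health support assistant."

def build_prompt(user_message, system=SYSTEM):
    """Assemble a single-turn prompt ending at the assistant header."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def extract_reply(decoded, prompt):
    """Strip the echoed prompt and any trailing end tag from decoded output."""
    reply = decoded[len(prompt):] if decoded.startswith(prompt) else decoded
    return reply.split("<|im_end|>")[0].strip()
```

Usage mirrors the example: pass `build_prompt("I'm feeling anxious about tomorrow.")` to the tokenizer, then call `extract_reply(response, prompt)` on the decoded text to keep only the assistant's turn.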

## Files Included

- `adapter_config.json` - LoRA adapter configuration
- `adapter_model.safetensors` - Fine-tuned LoRA weights
- `tokenizer.json` - Tokenizer files
- `training_config.json` - Training hyperparameters