# RoBERTa Depression Severity Classifier

## Model Description

This is a RoBERTa-base model fine-tuned for depression severity classification into four categories:
- Minimum (BDI-3: 0-9)
- Mild (BDI-3: 10-18)
- Moderate (BDI-3: 19-29)
- Severe (BDI-3: 30-63)
## Model Details

- Base Model: `roberta-base`
- Total Parameters: 124,648,708
- Trainable Parameters: 1,183,492 (0.95%)
- Model Size: ~10-20MB (adapters only)
## Performance
| Metric | Score |
|---|---|
| Accuracy | 0.7949 |
| F1-Score | 0.7927 |
| Precision | 0.7916 |
| Recall | 0.7949 |
## Training Details
- Epochs: 5
- Batch Size: 32
- Learning Rate: 0.0003
- Optimizer: AdamW
- LR Scheduler: Cosine
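The cosine scheduler anneals the learning rate from its peak toward zero over training. A minimal sketch of that decay curve (assuming no warmup phase, which the card does not specify):

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float = 3e-4) -> float:
    """Cosine-annealed learning rate, decaying from peak_lr at step 0 to ~0 at the end."""
    progress = step / total_steps
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1000  # illustrative number of optimizer steps
print(cosine_lr(0, total))            # start of training: the peak LR, 0.0003
print(cosine_lr(total // 2, total))   # midpoint: half the peak
print(cosine_lr(total, total))        # end of training: ~0.0
```

In practice the `transformers` Trainer provides this via `lr_scheduler_type="cosine"`; the sketch above only illustrates the shape of the schedule.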
## Usage

### Installation

```bash
pip install transformers peft torch
```

### Inference
```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Load model
model = AutoPeftModelForSequenceClassification.from_pretrained(
    "Akashpaul123/roberta-depression-severity-lora"
)
tokenizer = AutoTokenizer.from_pretrained("Akashpaul123/roberta-depression-severity-lora")

# Create pipeline
classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    device=0,    # GPU; use device=-1 for CPU
    top_k=None,  # return scores for all classes (replaces the deprecated return_all_scores=True)
)

# Predict
text = "I've been feeling really down and hopeless lately"
result = classifier(text)
print(result)
```
### Direct Model Usage

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer
import torch

# Load model and tokenizer
model = AutoPeftModelForSequenceClassification.from_pretrained("Akashpaul123/roberta-depression-severity-lora")
tokenizer = AutoTokenizer.from_pretrained("Akashpaul123/roberta-depression-severity-lora")

# Prepare input
text = "I'm struggling with constant sadness and anxiety"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256)

# Get prediction
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1)

# Map to labels
labels = {0: 'minimum', 1: 'mild', 2: 'moderate', 3: 'severe'}
print(f"Predicted: {labels[predicted_class.item()]}")
print(f"Confidence: {predictions[0][predicted_class.item()].item():.4f}")
```
## Label Mapping
| Label ID | Severity Level | BDI-3 Score Range |
|---|---|---|
| 0 | Minimum | 0-9 |
| 1 | Mild | 10-18 |
| 2 | Moderate | 19-29 |
| 3 | Severe | 30-63 |
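The table above can be expressed as a small helper that maps a raw BDI score to the model's label id (`bdi_to_label_id` is a hypothetical convenience function for illustration, not part of the released model):

```python
def bdi_to_label_id(score: int) -> int:
    """Map a BDI score (0-63) to the classifier's label id, per the table above."""
    if not 0 <= score <= 63:
        raise ValueError("BDI scores range from 0 to 63")
    if score <= 9:
        return 0  # minimum
    if score <= 18:
        return 1  # mild
    if score <= 29:
        return 2  # moderate
    return 3      # severe

id2label = {0: "minimum", 1: "mild", 2: "moderate", 3: "severe"}
print(id2label[bdi_to_label_id(25)])  # → moderate
```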
## Intended Use

### Primary Use Cases
- Mental health screening tools
- Research on depression detection
- Crisis detection systems
- Educational purposes
### Limitations

- **NOT a diagnostic tool**: for research and screening only
- Should be used alongside professional mental health assessment
- Performance may vary on different populations
- Text-based analysis has inherent limitations
## Ethical Considerations

⚠️ **Important**: This model is for research and screening purposes only. It should NOT be used as:
- A replacement for professional mental health diagnosis
- A standalone crisis intervention tool
- Medical decision-making without human oversight
Always involve qualified mental health professionals for clinical decisions.
## Citation

If you use this model, please cite:

```bibtex
@misc{roberta-depression-lora-2025,
  author       = {Akashpaul123},
  title        = {RoBERTa Depression Severity Classifier with LoRA},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Akashpaul123/roberta-depression-severity-lora}}
}
```
## Training Data
Trained on Depression Severity Levels Dataset with balanced classes (4,352 samples per class).
## Model Architecture

Built on `roberta-base` with LoRA adapters applied to the query and value attention projections:
- LoRA Rank (r): 16
- LoRA Alpha: 32
- LoRA Dropout: 0.1
- Target Modules: ['query', 'value']
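The trainable-parameter count reported above can be reproduced from this configuration: r=16 adapters on the query and value projections of all 12 layers, plus RoBERTa's sequence-classification head (a dense 768→768 layer and a 768→4 output projection). A quick arithmetic check, assuming hidden size 768 and 12 layers as in `roberta-base`:

```python
hidden, layers, r, num_labels = 768, 12, 16, 4

# Each adapted projection gains two low-rank matrices: A (hidden x r) and B (r x hidden).
lora_per_module = hidden * r + r * hidden
lora_total = lora_per_module * 2 * layers  # 'query' and 'value' in every layer

# Classification head: dense (768 -> 768, with bias) + out_proj (768 -> 4, with bias),
# both trained alongside the adapters.
head = (hidden * hidden + hidden) + (hidden * num_labels + num_labels)

trainable = lora_total + head
print(trainable)  # 1183492, matching the 1,183,492 trainable parameters above
print(f"{trainable / 124_648_708:.2%}")  # 0.95% of total parameters
```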
## Contact
- Author: Akashpaul123
- Hugging Face: Akashpaul123
## License
MIT License - See LICENSE file for details.
**Disclaimer**: This model is provided for research and educational purposes. Always consult qualified mental health professionals for clinical decisions.