# Mental-Health-Analysis
This model is a fine-tuned version of distilbert-base-uncased on a curated dataset of mental health-related social media text. It achieves the following results on the evaluation set:
- Accuracy: 0.9950
- F1: 0.9950
- Loss: 0.0299
- Precision: 0.9950
- Recall: 0.9950
## Model description
Mental-Health-Analysis is a transformer-based NLP model fine-tuned from distilbert-base-uncased for mental health text classification.
The model analyzes social media-style text (tweets, Reddit posts, short personal messages) and predicts mental health-related emotional states.
It is designed as a research and educational tool to explore how NLP can support early mental health signal detection.
### Supported Classes
- Depression
- Anxiety
- Suicidal Ideation
- Happy
- Neutral / Casual
The project emphasizes ethical AI, social impact, and responsible usage.
## Intended uses & limitations
### ✅ Intended Uses
- Academic research in NLP + mental health
- Educational demonstrations of text classification
- Prototyping mental health-aware AI systems
- Social impact and AI-for-good projects
### ❌ Out-of-Scope Uses
- Clinical diagnosis
- Medical decision-making
- Crisis intervention without human oversight
> ⚠️ **Disclaimer:** This model is not a substitute for professional mental health care. Predictions are probabilistic signals, not diagnoses.
## Training and evaluation data
- Data Source: Curated and balanced dataset of social media text
- Platforms: Twitter, Reddit (public posts only)
- Task: 5-class text classification
- Data Split:
  - 80% training
  - 20% evaluation
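The 80/20 split above can be sketched with a seeded shuffle; a minimal illustration in plain Python (the exact split method used by the authors is not stated in this card):

```python
import random

def train_eval_split(samples, train_frac=0.8, seed=42):
    """Shuffle deterministically, then split into train/eval lists."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# Example with 10 placeholder records: 8 go to training, 2 to evaluation.
train, eval_set = train_eval_split([{"text": f"post {i}"} for i in range(10)])
print(len(train), len(eval_set))  # 8 2
```

Seeding the shuffle keeps the split reproducible across runs, matching the fixed seed (42) used for training.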
Basic preprocessing steps:
- Missing text replaced with empty strings
- Text normalized and tokenized using DistilBERT tokenizer
- Padding and truncation applied to max sequence length
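The preprocessing steps above could be expressed as a single `map`-style function; a hedged sketch, assuming a `text` column and a hypothetical maximum length of 128 tokens (the card does not state the exact value):

```python
def preprocess(batch, tokenizer, max_length=128):
    # Replace missing text with empty strings before tokenizing.
    texts = [t if t is not None else "" for t in batch["text"]]
    # The DistilBERT (uncased) tokenizer normalizes the text; padding and
    # truncation clamp every example to the maximum sequence length.
    return tokenizer(texts, padding="max_length", truncation=True,
                     max_length=max_length)

# With the Hugging Face libraries this would be wired up roughly as:
# tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# dataset = dataset.map(lambda b: preprocess(b, tokenizer), batched=True)
```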
## Training procedure

### Model Architecture

- Base Model: distilbert-base-uncased
- Architecture Type: Transformer encoder
- Objective: Multi-class sequence classification (5 labels)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
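The hyperparameters above map onto the Hugging Face Trainer configuration; a hypothetical reconstruction (the authors' actual training script is not included in this card, and `output_dir` and the evaluation cadence of 500 steps are inferred from the results table):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mental-health-analysis",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",     # fused AdamW, betas/epsilon at defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    eval_strategy="steps",
    eval_steps=500,
)
```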
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 0.2938 | 0.2006 | 500 | 0.9574 | 0.9581 | 0.1866 | 0.9612 | 0.9574 |
| 0.119 | 0.4013 | 1000 | 0.9768 | 0.9769 | 0.1090 | 0.9773 | 0.9768 |
| 0.0992 | 0.6019 | 1500 | 0.9819 | 0.9820 | 0.0926 | 0.9822 | 0.9819 |
| 0.0039 | 0.8026 | 2000 | 0.9876 | 0.9876 | 0.0731 | 0.9878 | 0.9876 |
| 0.1583 | 1.0032 | 2500 | 0.9911 | 0.9911 | 0.0475 | 0.9911 | 0.9911 |
| 0.0021 | 1.2039 | 3000 | 0.9915 | 0.9915 | 0.0448 | 0.9915 | 0.9915 |
| 0.033 | 1.4045 | 3500 | 0.9916 | 0.9916 | 0.0535 | 0.9916 | 0.9916 |
| 0.0332 | 1.6051 | 4000 | 0.9915 | 0.9915 | 0.0428 | 0.9915 | 0.9915 |
| 0.0008 | 1.8058 | 4500 | 0.9936 | 0.9936 | 0.0360 | 0.9936 | 0.9936 |
| 0.0263 | 2.0064 | 5000 | 0.9941 | 0.9941 | 0.0301 | 0.9941 | 0.9941 |
| 0.0214 | 2.2071 | 5500 | 0.9939 | 0.9939 | 0.0345 | 0.9939 | 0.9939 |
| 0.0006 | 2.4077 | 6000 | 0.9945 | 0.9945 | 0.0302 | 0.9945 | 0.9945 |
| 0.0114 | 2.6083 | 6500 | 0.9952 | 0.9952 | 0.0281 | 0.9952 | 0.9952 |
| 0.0418 | 2.8090 | 7000 | 0.9950 | 0.9950 | 0.0299 | 0.9950 | 0.9950 |
## How to Use the Model

### Quick Start (Pipeline)

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vedabtpatil07/Mental-Health-Analysis"
)

text = "I feel exhausted and hopeless lately."
result = classifier(text)
print(result)
```
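Because predictions are probabilistic signals rather than diagnoses, downstream code may want to gate on the model's confidence; a minimal sketch operating on pipeline-style output dicts (the 0.5 threshold and the `route_prediction` helper are illustrative assumptions, not part of this model):

```python
def route_prediction(result, threshold=0.5):
    """Treat low-confidence predictions as inconclusive rather than a label.

    `result` is a pipeline-style output: a list with one dict per input,
    each holding a predicted `label` and a softmax `score`.
    """
    top = result[0]
    if top["score"] < threshold:
        return "inconclusive"
    return top["label"]

# Examples with mocked pipeline outputs:
print(route_prediction([{"label": "Depression", "score": 0.97}]))  # Depression
print(route_prediction([{"label": "Anxiety", "score": 0.41}]))     # inconclusive
```

Routing uncertain predictions to human review rather than acting on them directly is one way to keep the "human oversight" requirement from the out-of-scope section in practice.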
## Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
## Citation

```bibtex
@misc{mental_health_analysis_2025,
  title = {Mental Health Analysis: DistilBERT-Based Text Classification},
  author = {Vedant Patil and Ansh Jaiswal},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/vedabtpatil07/Mental-Health-Analysis}}
}
```
## Model Card Authors
- Vedant Patil
- Ansh Jaiswal