# BertWithKNN for PHQ-8/9 Depression Score Prediction
## How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("username/PHQ8-prototype")
model = AutoModel.from_pretrained("username/PHQ8-prototype", trust_remote_code=True)

inputs = tokenizer("I feel tired and down.", return_tensors="pt")
outputs = model.inference(**inputs)  # custom inference method loaded via trust_remote_code
print(outputs)
```
## Model Description
This model combines a BERT-based encoder with a k-Nearest Neighbors (kNN) regression head to predict PHQ-8/9 depression scores.
It was trained on:
- MHD dataset for pretraining (Masked Language Modeling task)
- DAIC-WOZ dataset for downstream fine-tuning
Additionally, a simple rule-based sentiment/depression keyword classifier is integrated to adjust the prediction.
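The encoder-plus-kNN design can be sketched as follows. This is an illustrative assumption of the pipeline shape, not the repo's actual code: a real run would mean-pool BERT's last hidden state, whereas here a `HashingVectorizer` stands in for the encoder so the sketch runs offline, and the training texts and scores are made up.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.neighbors import KNeighborsRegressor

# Stand-in for the BERT encoder: maps any text to a fixed-size vector.
# In the real model this would be a pooled BERT hidden state.
vectorizer = HashingVectorizer(n_features=256, norm="l2")

def embed(texts):
    return vectorizer.transform(texts).toarray()

# Fit the kNN regression head on embeddings of transcripts
# with known PHQ-8 scores (toy examples, not DAIC-WOZ data).
train_texts = [
    "I feel fine and slept well.",
    "I can't sleep and feel hopeless.",
    "Work was okay, nothing unusual.",
]
train_scores = [2.0, 18.0, 4.0]

knn = KNeighborsRegressor(n_neighbors=1)
knn.fit(embed(train_texts), train_scores)

# Predict for a new transcript and clamp to the PHQ-8/9 range.
pred = float(knn.predict(embed(["I feel hopeless and can't sleep."]))[0])
pred = min(max(pred, 0.0), 27.0)
print(pred)  # → 18.0 (nearest training transcript)
```

The kNN head makes predictions interpretable: each score can be traced back to the nearest training transcripts in embedding space.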
## Intended Use
- Task: Mental health score prediction (PHQ-8/9) from text
- Input: English conversation transcripts (e.g., patient–clinician dialogues, daily self-reports)
- Output: Predicted depression score in the range [0, 27]
This model is for research and educational purposes only, not for clinical or medical use.
## Training Details
### Pretraining
- Dataset: MHD
- Objective: Masked Language Modeling (MLM)
### Fine-tuning
- Dataset: DAIC-WOZ
- Objective: Depression score regression/classification
### Post-processing
- Integrated with a simple rule-based classifier that captures keywords like *depressed*, *hopeless*, and *worthless* and adjusts the score.
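A minimal sketch of what such a post-processing rule could look like. The keyword list comes from the card; the per-keyword bump of 1.0 and the function name are assumptions for illustration only.

```python
# Hypothetical rule-based adjustment: nudge the regression output
# upward for each strong depression keyword, then clamp to [0, 27].
KEYWORDS = {"depressed", "hopeless", "worthless"}

def adjust_score(text: str, score: float, bump: float = 1.0) -> float:
    """Add `bump` per matched keyword, then clamp to the PHQ-8/9 range."""
    tokens = set(text.lower().split())
    hits = len(KEYWORDS & tokens)
    return min(max(score + bump * hits, 0.0), 27.0)

print(adjust_score("I feel hopeless and worthless", 12.0))  # → 14.0 (two hits)
```

The clamp keeps the adjusted score inside the valid PHQ-8/9 range even when several keywords fire at once.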
## Evaluation Results

- Binary classification (depressed vs. non-depressed):
  - Accuracy: 0.823
  - F1-score: 0.724
- Regression (PHQ-8/9 score):
  - MAE: 3.414
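These metrics could be computed along the following lines. The data here is made up, and the binarization cutoff of 10 is a commonly used PHQ-8 threshold; whether this card used that exact cutoff is an assumption.

```python
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error

# Toy true/predicted PHQ-8 scores (not the card's evaluation data).
y_true = [4, 12, 19, 2, 10]
y_pred = [6.0, 9.5, 16.0, 3.0, 11.0]

# Regression metric on the raw scores.
mae = mean_absolute_error(y_true, y_pred)

# Binarize at a cutoff of 10 for the classification metrics.
true_bin = [int(s >= 10) for s in y_true]
pred_bin = [int(s >= 10) for s in y_pred]
acc = accuracy_score(true_bin, pred_bin)
f1 = f1_score(true_bin, pred_bin)

print(mae, acc, f1)  # → 1.9 0.8 0.8
```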