---
license: apache-2.0
tags:
- text-classification
- emotion
- NLP
- DeBERTa
language: en
datasets:
- GoEmotions
metrics:
- Training Accuracy
- Validation Accuracy
- Test Accuracy
- Precision
- Recall
- F1 (micro)
pipeline_tag: text-classification
---
# Emotion DeBERTa – 5-Class Emotion Classifier

## Model Description

This model is a fine-tuned version of **DeBERTa-v3-base** for emotion classification.
It predicts one of five emotional states from input text:

- `anger`
- `fear`
- `joy`
- `sadness`
- `surprise`

The model was trained as part of a university capstone project focused on building an emotion-aware mental healthcare companion.

## Base Model

- `microsoft/deberta-v3-base`

## Training Details

- **Task:** Text-based emotion classification
- **Architecture:** DeBERTa encoder with a custom classification head
- **Number of labels:** 5
- **Training method:** Supervised fine-tuning
- **Output:** Single-label emotion prediction

The model was originally trained with a custom PyTorch class and later converted to the Hugging Face format for deployment and reproducibility.
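The classification head maps the encoder output to five logits; a softmax followed by an argmax then yields the single-label prediction. A minimal sketch of that final step (the label order below is an assumption for illustration, not taken from the model's config):

```python
import math

# Hypothetical label order -- the real id2label mapping lives in the
# model's config and may differ.
LABELS = ["anger", "fear", "joy", "sadness", "surprise"]

def softmax(logits):
    """Convert raw classification-head logits to probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Single-label prediction: the highest-probability emotion."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Made-up logits, for illustration only.
label, prob = predict([0.2, 3.1, -0.5, 0.4, -1.0])
print(label)  # fear
```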

## Intended Use

This model is designed for:

- Emotion-aware chat applications
- Mental health companion systems
- Sentiment and emotional analysis in academic projects
- Research and educational purposes

It is **not** intended for clinical diagnosis or professional mental health decisions.

## Limitations

- Trained on a limited dataset
- May not generalize well to:
  - Slang-heavy text
  - Code-mixed or multilingual inputs
  - Highly sarcastic or ambiguous sentences
- Predictions should be treated as probabilistic, not factual

## Example Usage

```python
from transformers import pipeline

classifier = pipeline(
    task="text-classification",
    model="Sadman4701/Apricity-Final",
    top_k=None,  # return scores for all five labels
)

text = "I feel scared but also strangely hopeful about the future."

# With top_k=None, a single string input yields a flat list of
# {"label", "score"} dicts, one per emotion.
outputs = classifier(text)

THRESHOLD = 0.5  # adjust to suit your application
predicted_emotions = [o["label"] for o in outputs if o["score"] >= THRESHOLD]

print(predicted_emotions)
```
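Since the card describes the output as a single-label prediction, the most common post-processing step is simply taking the top-scoring emotion. A minimal sketch, using made-up scores in the same shape the pipeline returns for one input:

```python
# Illustrative scores only -- not real model output.
scores = [
    {"label": "anger", "score": 0.03},
    {"label": "fear", "score": 0.61},
    {"label": "joy", "score": 0.22},
    {"label": "sadness", "score": 0.09},
    {"label": "surprise", "score": 0.05},
]

# Single-label prediction: pick the highest-scoring emotion.
top = max(scores, key=lambda s: s["score"])
print(top["label"])  # fear
```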