Training in progress, epoch 1

- README.md: +78 -60
- model.safetensors: +1 -1

README.md
CHANGED
@@ -1,65 +1,83 @@
 ---
-license: apache-2.0
-base_model: distilbert-base-uncased
 tags:
 metrics:
-- accuracy
-model-index:
-- name: emotion-classification-model
-  results: []
 ---
 ---
+language: en
 tags:
+- emotion-classification
+- text-classification
+- distilbert
+datasets:
+- dair-ai/emotion
 metrics:
+- accuracy
 ---

# Emotion Classification Model

## Model Description

This model is a fine-tuned version of `distilbert-base-uncased` adapted for emotion classification. It uses the DistilBERT architecture to categorize text into the six emotion labels of the `dair-ai/emotion` dataset: sadness, joy, love, anger, fear, and surprise. The distilled architecture keeps inference fast, making the model suitable for real-time applications where quick, accurate emotion detection is essential.

## Intended Uses & Limitations

### Intended Uses

- **Sentiment Analysis:** Analyzing customer feedback to gauge emotional responses.
- **Mental Health Monitoring:** Assisting in the detection of emotional states in therapeutic settings.
- **Social Media Analysis:** Understanding public sentiment and emotional trends on platforms such as Twitter or Facebook.

### Limitations

- **Bias in Training Data:** The model may inherit biases present in the `dair-ai/emotion` dataset, potentially affecting its performance across different demographics or contexts.
- **Contextual Understanding:** While effective at classifying isolated text snippets, the model may struggle with nuanced emotions in longer, context-dependent conversations.
- **Language Constraints:** Currently optimized for English; performance may degrade on multilingual or non-English inputs.

## Training and Evaluation Data

- **Training Dataset:** `dair-ai/emotion`, containing approximately 16,000 labeled examples across six emotion categories.
- **Validation Dataset:** A subset of the training data reserved for evaluating model performance during training.
- **Test Dataset:** A separate held-out set used to assess final performance metrics.
- **Preprocessing Steps:**
  - Tokenization using `DistilBertTokenizerFast` with a maximum sequence length of 32 tokens.
  - Padding and truncation to ensure uniform input size.

## Training Procedure

### Hyperparameters

- **Learning Rate:** 6e-5
- **Training Batch Size:** 16
- **Evaluation Batch Size:** 32
- **Number of Epochs:** 2
- **Learning Rate Scheduler:** Linear
- **Gradient Accumulation Steps:** 2
- **Mixed Precision Training:** Enabled (native AMP) if CUDA is available

### Training Results

| Epoch | Training Loss | Validation Loss | Validation Accuracy |
|-------|---------------|-----------------|---------------------|
| 1     | 0.2515        | 0.2269          | 0.9135              |
| 2     | 0.1768        | 0.1948          | 0.9245              |

### Framework Versions

- **Transformers:** 4.46.2
- **PyTorch:** 2.5.1+cu118
- **Datasets:** 3.1.0
- **Tokenizers:** 0.20.3

## Evaluation Results

- **Validation Accuracy:** 92.45%
- **Test Accuracy:** 92.60%
- **Training Time:** 2.75 minutes

## Usage

```python
from transformers import pipeline

# Initialize the emotion classification pipeline
classifier = pipeline(
    "text-classification",
    model="hamzawaheed/emotion-classification-model"
)

# Example text input
text = "I’m so happy today!"

# Perform emotion classification
result = classifier(text)

# Display the result
print(result)
```
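By default the pipeline returns only the top label. Passing `top_k=None` (a standard `TextClassificationPipeline` argument) returns scores for every emotion, which can be useful for thresholding or inspecting borderline inputs:

```python
from transformers import pipeline

# Request scores for all classes instead of only the best one
classifier = pipeline(
    "text-classification",
    model="hamzawaheed/emotion-classification-model",
    top_k=None,
)

# A list of {"label": ..., "score": ...} dicts, sorted by descending score
scores = classifier("I’m so happy today!")[0]
print(scores)
```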
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:f0468266d81b4314afa0d4877a669abe929dc75be79a98bd82ec98032f604133
 size 267844872