hamzawaheed committed
Commit 50982f6 · verified · 1 Parent(s): e242348

Training in progress, epoch 1

Files changed (2):
  1. README.md +78 -60
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,65 +1,83 @@
  ---
- library_name: transformers
- license: apache-2.0
- base_model: distilbert-base-uncased
  tags:
- - generated_from_trainer
  metrics:
- - accuracy
- model-index:
- - name: emotion-classification-model
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # emotion-classification-model
-
- This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1941
- - Accuracy: 0.926
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 6e-05
- - train_batch_size: 16
- - eval_batch_size: 32
- - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 32
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 2
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 0.2211        | 1.0   | 500  | 0.2383          | 0.915    |
- | 0.1274        | 2.0   | 1000 | 0.1941          | 0.926    |
-
-
- ### Framework versions
-
- - Transformers 4.46.2
- - Pytorch 2.5.1+cu118
- - Datasets 3.1.0
- - Tokenizers 0.20.3
  ---
+ language: en
  tags:
+ - emotion-classification
+ - text-classification
+ - distilbert
+ datasets:
+ - dair-ai/emotion
  metrics:
+ - accuracy
  ---

+ # Emotion Classification Model
+
+ ## Model Description
+ This model is a fine-tuned version of `distilbert-base-uncased` adapted for emotion classification. It uses the DistilBERT architecture to classify English text into the six emotion categories of the `dair-ai/emotion` dataset: sadness, joy, love, anger, fear, and surprise. The model is optimized for efficiency, making it suitable for real-time applications where fast, accurate emotion detection is essential.
+
+ ## Intended Uses & Limitations
+ ### Intended Uses
+ - **Sentiment Analysis:** Analyzing customer feedback to gauge emotional responses.
+ - **Mental Health Monitoring:** Assisting in the detection of emotional states in therapeutic settings.
+ - **Social Media Analysis:** Understanding public sentiment and emotional trends on platforms like Twitter or Facebook.
+
+ ### Limitations
+ - **Bias in Training Data:** The model may inherit biases present in the `dair-ai/emotion` dataset, potentially affecting its performance across different demographics or contexts.
+ - **Contextual Understanding:** While effective at classifying isolated text snippets, the model may struggle with nuanced emotions in longer, context-dependent conversations.
+ - **Language Constraints:** Trained on English text only; performance may degrade on multilingual or non-English inputs.
+
+ ## Training and Evaluation Data
+ - **Training Dataset:** `dair-ai/emotion`, containing approximately 16,000 labeled examples across six emotion categories.
+ - **Validation Dataset:** A subset of the training data reserved for evaluating model performance during training.
+ - **Test Dataset:** A separate held-out set used to assess the final performance metrics.
+ - **Preprocessing Steps** (a sketch follows this list):
+   - Tokenization using `DistilBertTokenizerFast` with a maximum sequence length of 32 tokens.
+   - Padding and truncation to ensure uniform input size.
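+
+ A minimal sketch of this preprocessing, assuming the `dair-ai/emotion` schema (a `text` column and a `label` column) and the tokenizer of the base checkpoint; the exact preprocessing code is not included in this repository:
+
+ ```python
+ from datasets import load_dataset
+ from transformers import DistilBertTokenizerFast
+
+ dataset = load_dataset("dair-ai/emotion")
+ tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
+
+ def tokenize(batch):
+     # Pad or truncate every example to exactly 32 tokens, as described above
+     return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=32)
+
+ tokenized = dataset.map(tokenize, batched=True)
+ ```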
+
+ ## Training Procedure
+ ### Hyperparameters
+ - **Learning Rate:** 6e-5
+ - **Training Batch Size:** 16
+ - **Evaluation Batch Size:** 32
+ - **Number of Epochs:** 2
+ - **Learning Rate Scheduler:** Linear
+ - **Gradient Accumulation Steps:** 2
+ - **Mixed Precision Training:** Enabled (Native AMP) if CUDA is available
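+
+ These settings map directly onto `transformers.TrainingArguments`. A sketch under stated assumptions: the output directory name is hypothetical, per-epoch evaluation is inferred from the per-epoch results below, and `seed=42` is carried over from the previous revision of this card:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="emotion-classification-model",  # hypothetical name, not stated in the card
+     learning_rate=6e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=32,
+     num_train_epochs=2,
+     lr_scheduler_type="linear",
+     gradient_accumulation_steps=2,  # effective train batch size of 32
+     fp16=True,                      # Native AMP; requires CUDA
+     eval_strategy="epoch",          # evaluate once per epoch, as in the results table
+     seed=42,                        # from the previous card revision
+ )
+ ```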
+
+ ### Training Results
+ | Epoch | Training Loss | Validation Loss | Validation Accuracy |
+ |-------|---------------|-----------------|---------------------|
+ | 1     | 0.2515        | 0.2269          | 0.9135              |
+ | 2     | 0.1768        | 0.1948          | 0.9245              |
+
+ ### Framework Versions
+ - **Transformers:** 4.46.2
+ - **PyTorch:** 2.5.1+cu118
+ - **Datasets:** 3.1.0
+ - **Tokenizers:** 0.20.3
+
+ ## Evaluation Results
+ - **Validation Accuracy:** 92.45%
+ - **Test Accuracy:** 92.60%
+ - **Training Time:** 2.75 minutes
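+
+ A sketch of how the test accuracy could be reproduced with the `evaluate` library, assuming the `test` split of `dair-ai/emotion` and that this checkpoint's `id2label` mapping uses the dataset's label names (both assumptions about this particular run):
+
+ ```python
+ import evaluate
+ from datasets import load_dataset
+ from transformers import pipeline
+
+ test_set = load_dataset("dair-ai/emotion", split="test")
+ classifier = pipeline("text-classification", model="hamzawaheed/emotion-classification-model")
+
+ # Map predicted label names back to the dataset's integer label ids
+ label2id = {name: i for i, name in enumerate(test_set.features["label"].names)}
+ outputs = classifier(test_set["text"], truncation=True, max_length=32)
+ predictions = [label2id[out["label"]] for out in outputs]
+
+ accuracy = evaluate.load("accuracy")
+ print(accuracy.compute(predictions=predictions, references=test_set["label"]))
+ ```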
+
+ ## Usage
+
+ ```python
+ from transformers import pipeline
+
+ # Initialize the emotion classification pipeline
+ classifier = pipeline(
+     "text-classification",
+     model="hamzawaheed/emotion-classification-model"
+ )
+
+ # Example text input
+ text = "I'm so happy today!"
+
+ # Perform emotion classification
+ result = classifier(text)
+
+ # Display the result
+ print(result)
+ ```
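+
+ To return scores for all six labels rather than only the top prediction, the standard `top_k` parameter of the `transformers` text-classification pipeline can be used:
+
+ ```python
+ classifier = pipeline(
+     "text-classification",
+     model="hamzawaheed/emotion-classification-model",
+     top_k=None,  # return a score for every label instead of only the best one
+ )
+ print(classifier("I'm so happy today!"))
+ ```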
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f8ae17a405d7f4c8c082a59c2a7c0e0cb2c189d40ba8fddba1e6478962797a64
  size 267844872

  version https://git-lfs.github.com/spec/v1
+ oid sha256:f0468266d81b4314afa0d4877a669abe929dc75be79a98bd82ec98032f604133
  size 267844872