Panda0116 committed on
Commit ccab078 · verified · 1 Parent(s): 00aa1ae

Panda0116/emotion-classification-model

Files changed (2)
  1. README.md +47 -73
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,89 +1,63 @@
- # emotion-classification-model
-
- This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [dair-ai/emotion dataset](https://huggingface.co/datasets/dair-ai/emotion). It is designed to classify text into various emotional categories.
-
- It achieves the following results:
- - **Validation Accuracy:** 93.55%
- - **Test Accuracy:** 93.3%
-
- ## Model Description
-
- This model uses the DistilBERT architecture, which is a lighter and faster variant of BERT. It has been fine-tuned specifically for emotion classification, making it suitable for tasks such as sentiment analysis, customer feedback analysis, and user emotion detection.
-
- ### Key Features
- - Efficient and lightweight for deployment.
- - High accuracy for emotion detection tasks.
- - Pretrained on a diverse dataset and fine-tuned for high specificity to emotions.
-
- ## Intended Uses & Limitations
-
- ### Intended Uses
- - Emotion analysis in text data.
- - Sentiment detection in customer reviews, tweets, or user feedback.
- - Psychological or behavioral studies to analyze emotional tone in communications.
-
- ### Limitations
- - May not generalize well to datasets with highly domain-specific language.
- - Performance might degrade with noisy or ambiguous text inputs.
- - The model is English-specific and may not perform well on non-English text.
-
- ## Training and Evaluation Data
-
- ### Training Dataset
- - **Dataset:** [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion)
- - **Training Set Size:** 16,000 examples
- - **Dataset Description:** The dataset contains English sentences labeled with six emotional categories: anger, joy, optimism, sadness, fear, and disgust.
-
- ### Results
- - **Training Time:** ~190 seconds
- - **Training Loss:** 0.2034
- - **Validation Accuracy:** 93.55%
- - **Test Accuracy:** 93.3%
-
- ## Training Procedure
-
- ### Hyperparameters
- - **Learning Rate:** 5e-05
- - **Batch Size:** 16 (train and evaluation)
- - **Epochs:** 3
- - **Seed:** 42
- - **Optimizer:** AdamW (betas=(0.9,0.999), epsilon=1e-08)
- - **Learning Rate Scheduler:** Linear
- - **Mixed Precision Training:** Native AMP
-
- ### Training and Validation Results
-
- | Epoch | Training Loss | Validation Loss | Validation Accuracy |
- |-------|---------------|-----------------|---------------------|
- | 1     | 0.2293        | 0.1746          | 93.35%              |
- | 2     | 0.1315        | 0.1529          | 93.70%              |
- | 3     | 0.0798        | 0.1554          | 93.55%              |
-
- ### Test Results
- - **Loss:** 0.1642
- - **Accuracy:** 93.3%
-
- ### Performance Metrics
- - **Training Speed:** ~252 samples/second
- - **Evaluation Speed:** ~1,250 samples/second
-
- ## Framework and Tools
-
- - **Transformers:** 4.46.2
- - **PyTorch:** 2.5.1+cu124
- - **Datasets:** 3.1.0
- - **Tokenizers:** 0.20.3
-
- ## Usage Example
-
- ```python
- from transformers import pipeline
-
- # Load the fine-tuned model
- classifier = pipeline("text-classification", model="your-model-path")
-
- # Example usage
- text = "I am so happy to see you!"
- emotion = classifier(text)
- print(emotion)
- ```
 
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: distilbert-base-uncased
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: emotion-classification-model
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # emotion-classification-model
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1390
+ - Accuracy: 0.9425
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: adamw_torch (betas=(0.9,0.999), epsilon=1e-08, no additional optimizer arguments)
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
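The hyperparameter list above maps onto a `transformers.TrainingArguments` configuration roughly as follows; it is shown as a plain dict so the mapping is explicit. This is a sketch reconstructed from the values stated in the card, not the repository's actual training script (which is not part of this commit):

```python
# Sketch only: the hyperparameters listed in this card, keyed by the
# transformers.TrainingArguments parameter names they would map to.
training_args = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "optim": "adamw_torch",         # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    "lr_scheduler_type": "linear",
    "num_train_epochs": 2,
    "fp16": True,                   # "Native AMP" mixed-precision training
}

# With transformers installed, these would typically be passed as
# TrainingArguments(output_dir=..., **training_args) to a Trainer.
```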
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.2225        | 1.0   | 1000 | 0.1696          | 0.932    |
+ | 0.1165        | 2.0   | 2000 | 0.1390          | 0.9425   |
+
+ ### Framework versions
+
+ - Transformers 4.46.2
+ - PyTorch 2.5.1+cu124
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
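Note that this commit drops the previous revision's usage example. For reference, the model can still be loaded through the `pipeline` API; the repo id below is taken from this commit's path, and the first call downloads the ~268 MB of weights from the Hub:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
classifier = pipeline(
    "text-classification",
    model="Panda0116/emotion-classification-model",
)

# Example usage: returns a list of {'label': ..., 'score': ...} dicts
print(classifier("I am so happy to see you!"))
```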
 
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1a025bd6eff877c2ad92507ac92faa56f193686e714d94c37b2dbd5f01191076
+ oid sha256:e68b65e28863ce996beb86107816989d08232f877e180ad88fc32ffa6b2fa97c
  size 267844872
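The `model.safetensors` entry above is a Git LFS pointer file: this commit changes only the `oid`, while the `size` stays identical, i.e. the weights were replaced by a different file of the same byte length. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is hypothetical, not part of git-lfs):

```python
# Parse a Git LFS pointer file of the form shown above
# (one "key value" pair per line, separated by a single space).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer contents from this commit
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e68b65e28863ce996beb86107816989d08232f877e180ad88fc32ffa6b2fa97c
size 267844872
"""

info = parse_lfs_pointer(pointer)
```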