superone001 committed on
Commit 434d0ee · verified · 1 Parent(s): f8731d3

Update README.md

Files changed (1): README.md (+134, -65)

---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: my-test-model
  results: []
datasets:
- stanfordnlp/imdb
---

# my-test-model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [IMDB movie review dataset](https://huggingface.co/datasets/stanfordnlp/imdb).
It achieves the following results on the evaluation set:
- Loss: 0.3448
- Accuracy: 0.9130
- F1: 0.9130

## Model description

This model is a fine-tuned version of DistilBERT-base-uncased for binary sentiment analysis of movie reviews. Key specifications:

- Task: sentiment classification (positive/negative)
- Base architecture: 6-layer distilled Transformer
- Parameters: ~66 million (standard DistilBERT configuration)
- Output labels:
  - 0: "NEGATIVE"
  - 1: "POSITIVE"
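For a quick smoke test, the checkpoint can be loaded through the `pipeline` API. This is a minimal sketch; the repo id below is an assumption inferred from this page and should be replaced with the model's actual path:

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the actual path of this checkpoint.
classifier = pipeline("text-classification", model="superone001/my-test-model")

print(classifier("A beautifully shot film, but the script never comes together."))
# With the id2label mapping above, this returns something like:
# [{'label': 'NEGATIVE', 'score': 0.99}]
```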

## Intended uses & limitations

**Acceptable use cases ✅**

- Sentiment analysis of English movie reviews
- Educational/research purposes for text classification
- Baseline model for entertainment-industry applications
- Integration into sentiment analysis pipelines

**Limitations ⚠️**

- Language restriction: supports English text only
- Domain specificity: optimized for movie reviews; performance degrades on other text types
- Bias risks: may reflect demographic/cultural biases in the training data
- Length constraint: maximum input length of 256 tokens; longer texts are truncated (handled explicitly in the sketch after this list)

**Not suitable for:**

- Multilingual text analysis
- Sarcasm/irony detection
- Fine-grained sentiment analysis (e.g., detecting anger or excitement)
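Because inputs past 256 tokens are silently truncated, long reviews are best tokenized explicitly. A sketch under the same hypothetical repo id as above:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "superone001/my-test-model"  # hypothetical repo id, as above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

long_review = "An exhaustive, scene-by-scene retrospective of the film. " * 200  # stand-in text

# Truncate explicitly to the 256-token limit stated in the card.
inputs = tokenizer(long_review, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```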

## Training and evaluation data

**Training data**

- Dataset: [IMDB movie reviews](https://huggingface.co/datasets/stanfordnlp/imdb)
- Size: 25,000 labeled examples
- Class distribution: 12,500 positive (50%), 12,500 negative (50%)
- Preprocessing:
  - Lowercasing
  - DistilBERT tokenization (WordPiece)
  - Dynamic padding

**Evaluation data**

- Test set: official IMDB test split (25,000 examples)
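The preprocessing described above can be reproduced roughly as follows. This is a sketch, not the author's training script; the dataset id is taken from the card's metadata:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

# Official IMDB splits: 25,000 train / 25,000 test, both balanced.
raw = load_dataset("stanfordnlp/imdb")

# The uncased tokenizer lowercases input as part of WordPiece tokenization.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate to the 256-token limit; leave padding to the collator.
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = raw.map(tokenize, batched=True)

# Dynamic padding: each batch is padded to its own longest sequence.
collator = DataCollatorWithPadding(tokenizer=tokenizer)
```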

## Training procedure

Training used the Hugging Face `Trainer` with the following configuration:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my-test-model",        # assumed output path (not shown in the original card)
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    weight_decay=0.01,
    eval_strategy="epoch",             # formerly `evaluation_strategy`; renamed in recent Transformers releases
    save_strategy="epoch",
    metric_for_best_model="accuracy",  # only takes effect together with load_best_model_at_end=True
)
```
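Wired into `Trainer`, the configuration above would look roughly like this. Here `tokenized` and `collator` come from the preprocessing sketch earlier, and the weighted-F1 averaging is an assumption (the card reports a single F1 value without naming the mode):

```python
import numpy as np
import evaluate
from transformers import AutoModelForSequenceClassification, Trainer

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        # Averaging mode is assumed; adjust if the original run used "binary".
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    id2label={0: "NEGATIVE", 1: "POSITIVE"},
    label2id={"NEGATIVE": 0, "POSITIVE": 1},
)

trainer = Trainer(
    model=model,
    args=training_args,              # TrainingArguments from the block above
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
    compute_metrics=compute_metrics,
)
trainer.train()
```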

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2497        | 1.0   | 1563 | 0.2486          | 0.9026   | 0.9024 |
| 0.1496        | 2.0   | 3126 | 0.2896          | 0.9135   | 0.9135 |
| 0.1222        | 3.0   | 4689 | 0.3448          | 0.9130   | 0.9130 |

### Framework versions

- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1