WolfPTL committed on
Commit 151a674 · verified · 1 Parent(s): 53f06b8

Initial upload - mBERT fine-tuned on r/Singapore
README.md ADDED
@@ -0,0 +1,88 @@
+ ---
+ language: en
+ tags:
+ - sentiment-analysis
+ - singapore
+ - singlish
+ - regression
+ license: mit
+ ---
+
+ # Singapore Sentiment Analyzer - MULTILINGUAL_BERT (Calibrated)
+
+ Fine-tuned sentiment analysis model for Singapore social media, **with post-training calibration** for improved accuracy.
+
+ ## 🎯 Performance
+
+ | Metric | Before Calibration | After Calibration | Improvement |
+ |--------|-------------------|-------------------|-------------|
+ | **Accuracy** | 52.6% | **64.0%** | **+11.4 pp** |
+ | **MAE** | 0.126 | **0.104** | **-0.022** |
+ | **RMSE** | 0.168 | **0.141** | **-0.027** |
+
+ ## 📊 Sentiment Scale
+
+ | Score | Category |
+ |-------|----------|
+ | 0.00 - 0.20 | Very Negative |
+ | 0.21 - 0.40 | Negative |
+ | 0.41 - 0.60 | Neutral |
+ | 0.61 - 0.80 | Positive |
+ | 0.81 - 1.00 | Very Positive |
+
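The band boundaries above can be expressed as a small helper (a sketch; `score_to_category` is a hypothetical name, not part of the released code):

```python
def score_to_category(score: float) -> str:
    """Map a calibrated score in [0, 1] to its sentiment band."""
    bands = [
        (0.20, "Very Negative"),
        (0.40, "Negative"),
        (0.60, "Neutral"),
        (0.80, "Positive"),
    ]
    for upper, label in bands:
        if score <= upper:
            return label
    return "Very Positive"

score_to_category(0.875)  # -> 'Very Positive'
```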
+ ## 🚀 Quick Start
+
+ ```python
+ from transformers import AutoTokenizer
+ from modeling_calibrated import CalibratedRegressionModel  # shipped in this repo
+
+ # Load model (calibration is automatic!)
+ model_name = "your-username/multilingual_bert-singapore-sentiment"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = CalibratedRegressionModel.from_pretrained(model_name)
+
+ # Predict sentiment
+ text = "This chicken rice is damn shiok sia!"
+ result = model.predict_sentiment(text, tokenizer)
+
+ print(f"Score: {result['score']:.3f}")    # 0.875
+ print(f"Category: {result['category']}")  # "Very Positive"
+ ```
+
+ ## 💡 What is Calibration?
+
+ After fine-tuning, we applied **isotonic regression calibration** on a validation set. This corrects systematic bias patterns where the model was:
+ - Over-predicting on negative examples
+ - Under-predicting on positive examples
+ - Struggling with boundary cases (e.g., neutral vs. negative)
+
+ The calibration layer is built into the model, so you get calibrated predictions automatically!
+
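Isotonic regression fits a monotone step function to (raw score, true score) pairs via the pool-adjacent-violators algorithm. A minimal sketch of that algorithm (illustrative only; the actual calibration presumably used a library implementation such as scikit-learn's `IsotonicRegression`):

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    # Each block stores [sum, count]; merge while a block's mean
    # exceeds the mean of the block that follows it.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

pava([1, 3, 2])  # -> [1.0, 2.5, 2.5]: the violating pair (3, 2) is pooled to its mean
```

Sorting validation pairs by raw model score and running PAVA over the true labels yields a monotone lookup table like the `output_scores` in `calibrator_config.json`.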
+ ## 📚 Training Details
+
+ - **Base model**: `nlptown/bert-base-multilingual-uncased-sentiment`
+ - **Training data**: 49,521 Singapore Reddit posts/comments
+ - **Fine-tuning**: 2 epochs, class-weighted MSE loss, peak learning rate 2e-5
+ - **Calibration**: Isotonic regression on a 500-sample validation set
+
+ ## 🌏 Singapore Context
+
+ This model understands Singlish patterns and Singapore-specific terminology:
+ - Particles: lah, lor, leh, sia
+ - Slang: shiok, sian, jialat, paiseh
+ - Local context: HDB, MRT, hawker, kopitiam
+
+ ## 📝 Citation
+
+ ```bibtex
+ @misc{multilingual_bert-singapore-calibrated,
+   title = {Singapore Sentiment Analyzer - MULTILINGUAL_BERT (Calibrated)},
+   year = {2026},
+   publisher = {HuggingFace},
+   url = {https://huggingface.co/your-username/multilingual_bert-singapore-sentiment}
+ }
+ ```
+
+ ## 📄 License
+
+ MIT License. Free for commercial and non-commercial use.
calibrator_config.json ADDED
@@ -0,0 +1,212 @@
1
+ {
2
+ "method": "isotonic",
3
+ "version": "1.0",
4
+ "mapping": {
5
+ "input_scores": [
6
+ 0.0,
7
+ 0.01,
8
+ 0.02,
9
+ 0.03,
10
+ 0.04,
11
+ 0.05,
12
+ 0.06,
13
+ 0.07,
14
+ 0.08,
15
+ 0.09,
16
+ 0.1,
17
+ 0.11,
18
+ 0.12,
19
+ 0.13,
20
+ 0.14,
21
+ 0.15,
22
+ 0.16,
23
+ 0.17,
24
+ 0.18,
25
+ 0.19,
26
+ 0.2,
27
+ 0.21,
28
+ 0.22,
29
+ 0.23,
30
+ 0.24,
31
+ 0.25,
32
+ 0.26,
33
+ 0.27,
34
+ 0.28,
35
+ 0.29,
36
+ 0.3,
37
+ 0.31,
38
+ 0.32,
39
+ 0.33,
40
+ 0.34,
41
+ 0.35000000000000003,
42
+ 0.36,
43
+ 0.37,
44
+ 0.38,
45
+ 0.39,
46
+ 0.4,
47
+ 0.41000000000000003,
48
+ 0.42,
49
+ 0.43,
50
+ 0.44,
51
+ 0.45,
52
+ 0.46,
53
+ 0.47000000000000003,
54
+ 0.48,
55
+ 0.49,
56
+ 0.5,
57
+ 0.51,
58
+ 0.52,
59
+ 0.53,
60
+ 0.54,
61
+ 0.55,
62
+ 0.56,
63
+ 0.5700000000000001,
64
+ 0.58,
65
+ 0.59,
66
+ 0.6,
67
+ 0.61,
68
+ 0.62,
69
+ 0.63,
70
+ 0.64,
71
+ 0.65,
72
+ 0.66,
73
+ 0.67,
74
+ 0.68,
75
+ 0.6900000000000001,
76
+ 0.7000000000000001,
77
+ 0.71,
78
+ 0.72,
79
+ 0.73,
80
+ 0.74,
81
+ 0.75,
82
+ 0.76,
83
+ 0.77,
84
+ 0.78,
85
+ 0.79,
86
+ 0.8,
87
+ 0.81,
88
+ 0.8200000000000001,
89
+ 0.8300000000000001,
90
+ 0.84,
91
+ 0.85,
92
+ 0.86,
93
+ 0.87,
94
+ 0.88,
95
+ 0.89,
96
+ 0.9,
97
+ 0.91,
98
+ 0.92,
99
+ 0.93,
100
+ 0.9400000000000001,
101
+ 0.9500000000000001,
102
+ 0.96,
103
+ 0.97,
104
+ 0.98,
105
+ 0.99,
106
+ 1.0
107
+ ],
108
+ "output_scores": [
109
+ 0.24555555555555555,
110
+ 0.24555555555555555,
111
+ 0.24555555555555555,
112
+ 0.24555555555555555,
113
+ 0.24555555555555555,
114
+ 0.24555555555555555,
115
+ 0.24555555555555555,
116
+ 0.24555555555555555,
117
+ 0.24555555555555555,
118
+ 0.24555555555555555,
119
+ 0.24555555555555555,
120
+ 0.24555555555555555,
121
+ 0.24555555555555555,
122
+ 0.24555555555555555,
123
+ 0.24555555555555555,
124
+ 0.24555555555555555,
125
+ 0.24555555555555555,
126
+ 0.24555555555555555,
127
+ 0.24555555555555555,
128
+ 0.24555555555555555,
129
+ 0.24555555555555555,
130
+ 0.24555555555555555,
131
+ 0.2823202749545064,
132
+ 0.3233333333333333,
133
+ 0.3245833333333333,
134
+ 0.3245833333333333,
135
+ 0.3245833333333333,
136
+ 0.3245833333333333,
137
+ 0.3245833333333333,
138
+ 0.3245833333333333,
139
+ 0.3245833333333333,
140
+ 0.3695652173913043,
141
+ 0.3695652173913043,
142
+ 0.3695652173913043,
143
+ 0.3695652173913043,
144
+ 0.3917777777777778,
145
+ 0.3917777777777778,
146
+ 0.42019607843137247,
147
+ 0.42019607843137247,
148
+ 0.42019607843137247,
149
+ 0.4494183260117991,
150
+ 0.4502631578947368,
151
+ 0.4502631578947368,
152
+ 0.4502631578947368,
153
+ 0.4502631578947368,
154
+ 0.4502631578947368,
155
+ 0.4578931542068387,
156
+ 0.468095238095238,
157
+ 0.468095238095238,
158
+ 0.48166666666666663,
159
+ 0.48166666666666663,
160
+ 0.48423076923076924,
161
+ 0.48423076923076924,
162
+ 0.48423076923076924,
163
+ 0.48423076923076924,
164
+ 0.48423076923076924,
165
+ 0.48423076923076924,
166
+ 0.48599999999999993,
167
+ 0.4973170731707317,
168
+ 0.4973170731707317,
169
+ 0.4973170731707317,
170
+ 0.4973170731707317,
171
+ 0.4973170731707317,
172
+ 0.4973170731707317,
173
+ 0.4973170731707317,
174
+ 0.4973170731707317,
175
+ 0.4973170731707317,
176
+ 0.4973170731707317,
177
+ 0.4973170731707317,
178
+ 0.4985785251534141,
179
+ 0.55,
180
+ 0.55,
181
+ 0.55,
182
+ 0.5504437504275843,
183
+ 0.5744712331299685,
184
+ 0.5775,
185
+ 0.5775,
186
+ 0.5777953204369858,
187
+ 0.5783333333333334,
188
+ 0.5783333333333334,
189
+ 0.5783333333333334,
190
+ 0.5783333333333334,
191
+ 0.6029676926947565,
192
+ 0.6063636363636363,
193
+ 0.6063636363636363,
194
+ 0.6063636363636363,
195
+ 0.6162275664506803,
196
+ 0.63,
197
+ 0.63,
198
+ 0.63,
199
+ 0.63,
200
+ 0.63,
201
+ 0.63,
202
+ 0.63,
203
+ 0.63,
204
+ 0.63,
205
+ 0.63,
206
+ 0.63,
207
+ 0.63,
208
+ 0.63,
209
+ 0.63
210
+ ]
211
+ }
212
+ }
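A table like the one above is typically consumed by piecewise-linear interpolation between grid points. A sketch of that lookup (hypothetical helper names; the repository's `modeling_calibrated.py` may implement it differently):

```python
import json

import numpy as np

def load_calibrator(path: str) -> dict:
    """Load the isotonic lookup table shipped with the model."""
    with open(path) as f:
        return json.load(f)["mapping"]

def calibrate(raw_score: float, mapping: dict) -> float:
    """Interpolate a raw model output through the isotonic mapping."""
    return float(np.interp(raw_score,
                           mapping["input_scores"],
                           mapping["output_scores"]))
```

With the table above, for example, any raw score up to 0.21 maps to roughly 0.246, and raw scores near the top saturate at 0.63, compressing the model's overconfident extremes.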
checkpoint-3095/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1db69d2b442fc7f8cdb617daf9d036f153d2a86c40944a6e42c6990cebf0f1ca
+ size 669452284
checkpoint-3095/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7429a9084c27376bb5b91e918deb9da87d4d85b61be6ebf4660b5cad13c60fc3
+ size 1339025658
checkpoint-3095/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec1d5b7b9bf8eb4b2f98128c10f9a0ee63e3904bb1961798817eb980b35e8d1e
+ size 14244
checkpoint-3095/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e35985b879f4b585e684547e7247dd8e9b77300766a13ccad5bdde2533af4a17
+ size 988
checkpoint-3095/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:afc0cd0e8762e5186308c8d34c7058e9ae393685de75555013c5819fcdb77637
+ size 1064
checkpoint-3095/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-3095/tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "backend": "tokenizers",
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "is_local": false,
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
checkpoint-3095/trainer_state.json ADDED
@@ -0,0 +1,244 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 1.0,
6
+ "eval_steps": 500,
7
+ "global_step": 3095,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.03231017770597738,
14
+ "grad_norm": 2.219433546066284,
15
+ "learning_rate": 3.166397415185784e-06,
16
+ "loss": 0.13468141555786134,
17
+ "step": 100
18
+ },
19
+ {
20
+ "epoch": 0.06462035541195477,
21
+ "grad_norm": 1.7088854312896729,
22
+ "learning_rate": 6.397415185783522e-06,
23
+ "loss": 0.08345352172851563,
24
+ "step": 200
25
+ },
26
+ {
27
+ "epoch": 0.09693053311793215,
28
+ "grad_norm": 1.8784979581832886,
29
+ "learning_rate": 9.62843295638126e-06,
30
+ "loss": 0.07643356323242187,
31
+ "step": 300
32
+ },
33
+ {
34
+ "epoch": 0.12924071082390953,
35
+ "grad_norm": 3.160496234893799,
36
+ "learning_rate": 1.2859450726979e-05,
37
+ "loss": 0.07728703022003174,
38
+ "step": 400
39
+ },
40
+ {
41
+ "epoch": 0.16155088852988692,
42
+ "grad_norm": 1.793244481086731,
43
+ "learning_rate": 1.609046849757674e-05,
44
+ "loss": 0.0715451717376709,
45
+ "step": 500
46
+ },
47
+ {
48
+ "epoch": 0.1938610662358643,
49
+ "grad_norm": 0.49027326703071594,
50
+ "learning_rate": 1.9321486268174476e-05,
51
+ "loss": 0.07877041339874268,
52
+ "step": 600
53
+ },
54
+ {
55
+ "epoch": 0.22617124394184168,
56
+ "grad_norm": 0.728702962398529,
57
+ "learning_rate": 1.999007830695722e-05,
58
+ "loss": 0.07102310180664062,
59
+ "step": 700
60
+ },
61
+ {
62
+ "epoch": 0.25848142164781907,
63
+ "grad_norm": 2.55074405670166,
64
+ "learning_rate": 1.9949097313414066e-05,
65
+ "loss": 0.074349045753479,
66
+ "step": 800
67
+ },
68
+ {
69
+ "epoch": 0.29079159935379645,
70
+ "grad_norm": 3.9224605560302734,
71
+ "learning_rate": 1.9876486114300215e-05,
72
+ "loss": 0.07766849994659424,
73
+ "step": 900
74
+ },
75
+ {
76
+ "epoch": 0.32310177705977383,
77
+ "grad_norm": 0.9257252216339111,
78
+ "learning_rate": 1.9772475555398188e-05,
79
+ "loss": 0.0694255781173706,
80
+ "step": 1000
81
+ },
82
+ {
83
+ "epoch": 0.3554119547657512,
84
+ "grad_norm": 0.8081497550010681,
85
+ "learning_rate": 1.9637396307446846e-05,
86
+ "loss": 0.07494896411895752,
87
+ "step": 1100
88
+ },
89
+ {
90
+ "epoch": 0.3877221324717286,
91
+ "grad_norm": 2.14691424369812,
92
+ "learning_rate": 1.9471677814871786e-05,
93
+ "loss": 0.0675746250152588,
94
+ "step": 1200
95
+ },
96
+ {
97
+ "epoch": 0.420032310177706,
98
+ "grad_norm": 0.38400450348854065,
99
+ "learning_rate": 1.927584693049412e-05,
100
+ "loss": 0.07241044998168945,
101
+ "step": 1300
102
+ },
103
+ {
104
+ "epoch": 0.45234248788368336,
105
+ "grad_norm": 0.580016553401947,
106
+ "learning_rate": 1.9050526240558083e-05,
107
+ "loss": 0.07040606498718262,
108
+ "step": 1400
109
+ },
110
+ {
111
+ "epoch": 0.48465266558966075,
112
+ "grad_norm": 1.0501762628555298,
113
+ "learning_rate": 1.8796432085402662e-05,
114
+ "loss": 0.07063678741455078,
115
+ "step": 1500
116
+ },
117
+ {
118
+ "epoch": 0.5169628432956381,
119
+ "grad_norm": 0.9808917045593262,
120
+ "learning_rate": 1.8514372282069805e-05,
121
+ "loss": 0.067080078125,
122
+ "step": 1600
123
+ },
124
+ {
125
+ "epoch": 0.5492730210016155,
126
+ "grad_norm": 0.8777210116386414,
127
+ "learning_rate": 1.8205243556089643e-05,
128
+ "loss": 0.06770069122314454,
129
+ "step": 1700
130
+ },
131
+ {
132
+ "epoch": 0.5815831987075929,
133
+ "grad_norm": 1.6874562501907349,
134
+ "learning_rate": 1.7870028690607476e-05,
135
+ "loss": 0.07079707622528077,
136
+ "step": 1800
137
+ },
138
+ {
139
+ "epoch": 0.6138933764135702,
140
+ "grad_norm": 1.3245418071746826,
141
+ "learning_rate": 1.7509793401916104e-05,
142
+ "loss": 0.06404186248779296,
143
+ "step": 1900
144
+ },
145
+ {
146
+ "epoch": 0.6462035541195477,
147
+ "grad_norm": 0.3585558831691742,
148
+ "learning_rate": 1.7125682951326795e-05,
149
+ "loss": 0.06824737071990966,
150
+ "step": 2000
151
+ },
152
+ {
153
+ "epoch": 0.678513731825525,
154
+ "grad_norm": 0.8804681897163391,
155
+ "learning_rate": 1.671891850415046e-05,
156
+ "loss": 0.07004157543182372,
157
+ "step": 2100
158
+ },
159
+ {
160
+ "epoch": 0.7108239095315024,
161
+ "grad_norm": 0.6549601554870605,
162
+ "learning_rate": 1.629079324736454e-05,
163
+ "loss": 0.06939639568328858,
164
+ "step": 2200
165
+ },
166
+ {
167
+ "epoch": 0.7431340872374798,
168
+ "grad_norm": 1.4602007865905762,
169
+ "learning_rate": 1.584266827830838e-05,
170
+ "loss": 0.06469368457794189,
171
+ "step": 2300
172
+ },
173
+ {
174
+ "epoch": 0.7754442649434572,
175
+ "grad_norm": 1.0985392332077026,
176
+ "learning_rate": 1.537596827747772e-05,
177
+ "loss": 0.07550141334533692,
178
+ "step": 2400
179
+ },
180
+ {
181
+ "epoch": 0.8077544426494345,
182
+ "grad_norm": 0.9523833394050598,
183
+ "learning_rate": 1.4892176979175388e-05,
184
+ "loss": 0.06276164531707763,
185
+ "step": 2500
186
+ },
187
+ {
188
+ "epoch": 0.840064620355412,
189
+ "grad_norm": 0.4565122723579407,
190
+ "learning_rate": 1.4392832454417938e-05,
191
+ "loss": 0.0644590950012207,
192
+ "step": 2600
193
+ },
194
+ {
195
+ "epoch": 0.8723747980613893,
196
+ "grad_norm": 2.104994058609009,
197
+ "learning_rate": 1.387952222109479e-05,
198
+ "loss": 0.06694219589233398,
199
+ "step": 2700
200
+ },
201
+ {
202
+ "epoch": 0.9046849757673667,
203
+ "grad_norm": 1.4507120847702026,
204
+ "learning_rate": 1.3353878196925727e-05,
205
+ "loss": 0.059137086868286136,
206
+ "step": 2800
207
+ },
208
+ {
209
+ "epoch": 0.9369951534733441,
210
+ "grad_norm": 1.1051520109176636,
211
+ "learning_rate": 1.2817571511262256e-05,
212
+ "loss": 0.06713937282562256,
213
+ "step": 2900
214
+ },
215
+ {
216
+ "epoch": 0.9693053311793215,
217
+ "grad_norm": 1.7327321767807007,
218
+ "learning_rate": 1.2272307192227245e-05,
219
+ "loss": 0.0657884407043457,
220
+ "step": 3000
221
+ }
222
+ ],
223
+ "logging_steps": 100,
224
+ "max_steps": 6190,
225
+ "num_input_tokens_seen": 0,
226
+ "num_train_epochs": 2,
227
+ "save_steps": 500,
228
+ "stateful_callbacks": {
229
+ "TrainerControl": {
230
+ "args": {
231
+ "should_epoch_stop": false,
232
+ "should_evaluate": false,
233
+ "should_log": false,
234
+ "should_save": true,
235
+ "should_training_stop": false
236
+ },
237
+ "attributes": {}
238
+ }
239
+ },
240
+ "total_flos": 0.0,
241
+ "train_batch_size": 16,
242
+ "trial_name": null,
243
+ "trial_params": null
244
+ }
checkpoint-3095/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2181dd01650d66d3bf665d833ec71debd0d4333d3bb123add14eb6c07cff376f
+ size 4856
checkpoint-6190/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d23f493e78965a0daa82d228ec41d363f64b452fda8c540d4abff4eabce5d3db
+ size 669452284
checkpoint-6190/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26baef79c4adaee5f18a372262d1f87d77cc6de95ecbe8481b131c0379593c75
+ size 1339025658
checkpoint-6190/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28ce47c620300bd9f2ffa8d5f2e1bae45452602cfdd4d5fc78dfcc639890e3ae
+ size 14244
checkpoint-6190/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bf7294856853e1a36d7af9256daa01a485d2e36a958b03789542891ac993896
+ size 988
checkpoint-6190/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b914bc6c2f6a2bf6812cf6f7dca6bf8a0b74e343fa07103bd9c0a9d6d0fa956e
+ size 1064
checkpoint-6190/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-6190/tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "backend": "tokenizers",
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "is_local": false,
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
checkpoint-6190/trainer_state.json ADDED
@@ -0,0 +1,461 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 2.0,
6
+ "eval_steps": 500,
7
+ "global_step": 6190,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.03231017770597738,
14
+ "grad_norm": 2.219433546066284,
15
+ "learning_rate": 3.166397415185784e-06,
16
+ "loss": 0.13468141555786134,
17
+ "step": 100
18
+ },
19
+ {
20
+ "epoch": 0.06462035541195477,
21
+ "grad_norm": 1.7088854312896729,
22
+ "learning_rate": 6.397415185783522e-06,
23
+ "loss": 0.08345352172851563,
24
+ "step": 200
25
+ },
26
+ {
27
+ "epoch": 0.09693053311793215,
28
+ "grad_norm": 1.8784979581832886,
29
+ "learning_rate": 9.62843295638126e-06,
30
+ "loss": 0.07643356323242187,
31
+ "step": 300
32
+ },
33
+ {
34
+ "epoch": 0.12924071082390953,
35
+ "grad_norm": 3.160496234893799,
36
+ "learning_rate": 1.2859450726979e-05,
37
+ "loss": 0.07728703022003174,
38
+ "step": 400
39
+ },
40
+ {
41
+ "epoch": 0.16155088852988692,
42
+ "grad_norm": 1.793244481086731,
43
+ "learning_rate": 1.609046849757674e-05,
44
+ "loss": 0.0715451717376709,
45
+ "step": 500
46
+ },
47
+ {
48
+ "epoch": 0.1938610662358643,
49
+ "grad_norm": 0.49027326703071594,
50
+ "learning_rate": 1.9321486268174476e-05,
51
+ "loss": 0.07877041339874268,
52
+ "step": 600
53
+ },
54
+ {
55
+ "epoch": 0.22617124394184168,
56
+ "grad_norm": 0.728702962398529,
57
+ "learning_rate": 1.999007830695722e-05,
58
+ "loss": 0.07102310180664062,
59
+ "step": 700
60
+ },
61
+ {
62
+ "epoch": 0.25848142164781907,
63
+ "grad_norm": 2.55074405670166,
64
+ "learning_rate": 1.9949097313414066e-05,
65
+ "loss": 0.074349045753479,
66
+ "step": 800
67
+ },
68
+ {
69
+ "epoch": 0.29079159935379645,
70
+ "grad_norm": 3.9224605560302734,
71
+ "learning_rate": 1.9876486114300215e-05,
72
+ "loss": 0.07766849994659424,
73
+ "step": 900
74
+ },
75
+ {
76
+ "epoch": 0.32310177705977383,
77
+ "grad_norm": 0.9257252216339111,
78
+ "learning_rate": 1.9772475555398188e-05,
79
+ "loss": 0.0694255781173706,
80
+ "step": 1000
81
+ },
82
+ {
83
+ "epoch": 0.3554119547657512,
84
+ "grad_norm": 0.8081497550010681,
85
+ "learning_rate": 1.9637396307446846e-05,
86
+ "loss": 0.07494896411895752,
87
+ "step": 1100
88
+ },
89
+ {
90
+ "epoch": 0.3877221324717286,
91
+ "grad_norm": 2.14691424369812,
92
+ "learning_rate": 1.9471677814871786e-05,
93
+ "loss": 0.0675746250152588,
94
+ "step": 1200
95
+ },
96
+ {
97
+ "epoch": 0.420032310177706,
98
+ "grad_norm": 0.38400450348854065,
99
+ "learning_rate": 1.927584693049412e-05,
100
+ "loss": 0.07241044998168945,
101
+ "step": 1300
102
+ },
103
+ {
104
+ "epoch": 0.45234248788368336,
105
+ "grad_norm": 0.580016553401947,
106
+ "learning_rate": 1.9050526240558083e-05,
107
+ "loss": 0.07040606498718262,
108
+ "step": 1400
109
+ },
110
+ {
111
+ "epoch": 0.48465266558966075,
112
+ "grad_norm": 1.0501762628555298,
113
+ "learning_rate": 1.8796432085402662e-05,
114
+ "loss": 0.07063678741455078,
115
+ "step": 1500
116
+ },
117
+ {
118
+ "epoch": 0.5169628432956381,
119
+ "grad_norm": 0.9808917045593262,
120
+ "learning_rate": 1.8514372282069805e-05,
121
+ "loss": 0.067080078125,
122
+ "step": 1600
123
+ },
124
+ {
125
+ "epoch": 0.5492730210016155,
126
+ "grad_norm": 0.8777210116386414,
127
+ "learning_rate": 1.8205243556089643e-05,
128
+ "loss": 0.06770069122314454,
129
+ "step": 1700
130
+ },
131
+ {
132
+ "epoch": 0.5815831987075929,
133
+ "grad_norm": 1.6874562501907349,
134
+ "learning_rate": 1.7870028690607476e-05,
135
+ "loss": 0.07079707622528077,
136
+ "step": 1800
137
+ },
138
+ {
139
+ "epoch": 0.6138933764135702,
140
+ "grad_norm": 1.3245418071746826,
141
+ "learning_rate": 1.7509793401916104e-05,
142
+ "loss": 0.06404186248779296,
143
+ "step": 1900
144
+ },
145
+ {
146
+ "epoch": 0.6462035541195477,
147
+ "grad_norm": 0.3585558831691742,
148
+ "learning_rate": 1.7125682951326795e-05,
149
+ "loss": 0.06824737071990966,
150
+ "step": 2000
151
+ },
152
+ {
153
+ "epoch": 0.678513731825525,
154
+ "grad_norm": 0.8804681897163391,
155
+ "learning_rate": 1.671891850415046e-05,
156
+ "loss": 0.07004157543182372,
157
+ "step": 2100
158
+ },
159
+ {
160
+ "epoch": 0.7108239095315024,
161
+ "grad_norm": 0.6549601554870605,
162
+ "learning_rate": 1.629079324736454e-05,
163
+ "loss": 0.06939639568328858,
164
+ "step": 2200
165
+ },
166
+ {
167
+ "epoch": 0.7431340872374798,
168
+ "grad_norm": 1.4602007865905762,
169
+ "learning_rate": 1.584266827830838e-05,
170
+ "loss": 0.06469368457794189,
171
+ "step": 2300
172
+ },
173
+ {
174
+ "epoch": 0.7754442649434572,
175
+ "grad_norm": 1.0985392332077026,
176
+ "learning_rate": 1.537596827747772e-05,
177
+ "loss": 0.07550141334533692,
178
+ "step": 2400
179
+ },
180
+ {
181
+ "epoch": 0.8077544426494345,
182
+ "grad_norm": 0.9523833394050598,
183
+ "learning_rate": 1.4892176979175388e-05,
184
+ "loss": 0.06276164531707763,
185
+ "step": 2500
186
+ },
187
+ {
188
+ "epoch": 0.840064620355412,
189
+ "grad_norm": 0.4565122723579407,
190
+ "learning_rate": 1.4392832454417938e-05,
191
+ "loss": 0.0644590950012207,
192
+ "step": 2600
193
+ },
194
+ {
195
+ "epoch": 0.8723747980613893,
196
+ "grad_norm": 2.104994058609009,
197
+ "learning_rate": 1.387952222109479e-05,
198
+ "loss": 0.06694219589233398,
199
+ "step": 2700
200
+ },
201
+ {
202
+ "epoch": 0.9046849757673667,
203
+ "grad_norm": 1.4507120847702026,
204
+ "learning_rate": 1.3353878196925727e-05,
205
+ "loss": 0.059137086868286136,
206
+ "step": 2800
207
+ },
208
+ {
209
+ "epoch": 0.9369951534733441,
210
+ "grad_norm": 1.1051520109176636,
211
+ "learning_rate": 1.2817571511262256e-05,
212
+ "loss": 0.06713937282562256,
213
+ "step": 2900
214
+ },
215
+ {
216
+ "epoch": 0.9693053311793215,
217
+ "grad_norm": 1.7327321767807007,
218
+ "learning_rate": 1.2272307192227245e-05,
219
+ "loss": 0.0657884407043457,
220
+ "step": 3000
221
+ },
222
+ {
223
+ "epoch": 1.001615508885299,
224
+ "grad_norm": 1.4763927459716797,
225
+ "learning_rate": 1.1719818746083432e-05,
226
+ "loss": 0.06157140254974365,
227
+ "step": 3100
228
+ },
229
+ {
230
+ "epoch": 1.0339256865912763,
231
+ "grad_norm": 0.5653871297836304,
232
+ "learning_rate": 1.1161862646064167e-05,
233
+ "loss": 0.052196812629699704,
234
+ "step": 3200
235
+ },
236
+ {
237
+ "epoch": 1.0662358642972536,
238
+ "grad_norm": 0.6949484944343567,
239
+ "learning_rate": 1.0600212748187441e-05,
240
+ "loss": 0.05160783767700195,
241
+ "step": 3300
242
+ },
243
+ {
244
+ "epoch": 1.098546042003231,
245
+ "grad_norm": 0.5760153532028198,
246
+ "learning_rate": 1.0036654651806548e-05,
247
+ "loss": 0.049915013313293455,
248
+ "step": 3400
249
+ },
250
+ {
251
+ "epoch": 1.1308562197092085,
252
+ "grad_norm": 1.5963973999023438,
253
+ "learning_rate": 9.472980022826234e-06,
254
+ "loss": 0.0530696964263916,
255
+ "step": 3500
256
+ },
257
+ {
258
+ "epoch": 1.1631663974151858,
259
+ "grad_norm": 1.1081798076629639,
260
+ "learning_rate": 8.910980897632122e-06,
261
+ "loss": 0.051336941719055174,
262
+ "step": 3600
263
+ },
264
+ {
265
+ "epoch": 1.1954765751211631,
266
+ "grad_norm": 1.7282005548477173,
267
+ "learning_rate": 8.352443985842219e-06,
268
+ "loss": 0.052054371833801266,
269
+ "step": 3700
270
+ },
271
+ {
272
+ "epoch": 1.2277867528271407,
273
+ "grad_norm": 1.7319399118423462,
274
+ "learning_rate": 7.799144989993374e-06,
275
+ "loss": 0.05039342403411865,
276
+ "step": 3800
277
+ },
278
+ {
279
+ "epoch": 1.260096930533118,
280
+ "grad_norm": 0.6499872207641602,
281
+ "learning_rate": 7.252842960221437e-06,
282
+ "loss": 0.050861058235168455,
283
+ "step": 3900
284
+ },
285
+ {
286
+ "epoch": 1.2924071082390953,
287
+ "grad_norm": 0.6934958100318909,
288
+ "learning_rate": 6.715274701882817e-06,
289
+ "loss": 0.04509526252746582,
290
+ "step": 4000
291
+ },
292
+ {
293
+ "epoch": 1.3247172859450727,
294
+ "grad_norm": 2.537475824356079,
295
+ "learning_rate": 6.188149253896711e-06,
296
+ "loss": 0.04738297939300537,
297
+ "step": 4100
298
+ },
299
+ {
300
+ "epoch": 1.35702746365105,
301
+ "grad_norm": 1.5743188858032227,
302
+ "learning_rate": 5.67822712537558e-06,
303
+ "loss": 0.0544910192489624,
304
+ "step": 4200
305
+ },
306
+ {
307
+ "epoch": 1.3893376413570275,
308
+ "grad_norm": 0.7133609056472778,
309
+ "learning_rate": 5.176830762786425e-06,
310
+ "loss": 0.04516469955444336,
311
+ "step": 4300
312
+ },
313
+ {
314
+ "epoch": 1.4216478190630049,
315
+ "grad_norm": 0.8588969111442566,
316
+ "learning_rate": 4.6907682369936616e-06,
317
+ "loss": 0.05183727741241455,
318
+ "step": 4400
319
+ },
320
+ {
321
+ "epoch": 1.4539579967689822,
322
+ "grad_norm": 0.3707756996154785,
323
+ "learning_rate": 4.221584839708363e-06,
324
+ "loss": 0.05028769016265869,
325
+ "step": 4500
326
+ },
327
+ {
328
+ "epoch": 1.4862681744749597,
329
+ "grad_norm": 2.329963445663452,
330
+ "learning_rate": 3.770772200456203e-06,
331
+ "loss": 0.051742258071899416,
332
+ "step": 4600
333
+ },
334
+ {
335
+ "epoch": 1.518578352180937,
336
+ "grad_norm": 1.0984081029891968,
337
+ "learning_rate": 3.339763544383562e-06,
338
+ "loss": 0.05167407512664795,
339
+ "step": 4700
340
+ },
341
+ {
342
+ "epoch": 1.5508885298869144,
343
+ "grad_norm": 0.6611376404762268,
344
+ "learning_rate": 2.929929135743055e-06,
345
+ "loss": 0.04872979164123535,
346
+ "step": 4800
347
+ },
348
+ {
349
+ "epoch": 1.5831987075928917,
350
+ "grad_norm": 1.3234156370162964,
351
+ "learning_rate": 2.542571921544533e-06,
352
+ "loss": 0.051429886817932126,
353
+ "step": 4900
354
+ },
355
+ {
356
+ "epoch": 1.615508885298869,
357
+ "grad_norm": 1.6327112913131714,
358
+ "learning_rate": 2.1789233892213234e-06,
359
+ "loss": 0.05031768321990967,
360
+ "step": 5000
361
+ },
362
+ {
363
+ "epoch": 1.6478190630048464,
364
+ "grad_norm": 0.8545958995819092,
365
+ "learning_rate": 1.84013965148099e-06,
366
+ "loss": 0.04621196746826172,
367
+ "step": 5100
368
+ },
369
+ {
370
+ "epoch": 1.680129240710824,
371
+ "grad_norm": 0.8000152111053467,
372
+ "learning_rate": 1.5272977707877135e-06,
373
+ "loss": 0.050682687759399415,
374
+ "step": 5200
375
+ },
376
+ {
377
+ "epoch": 1.7124394184168013,
378
+ "grad_norm": 0.9961308240890503,
379
+ "learning_rate": 1.2413923351614643e-06,
380
+ "loss": 0.04989287853240967,
381
+ "step": 5300
382
+ },
383
+ {
384
+ "epoch": 1.7447495961227788,
385
+ "grad_norm": 0.6385215520858765,
386
+ "learning_rate": 9.833322961802383e-07,
387
+ "loss": 0.048794183731079105,
388
+ "step": 5400
389
+ },
390
+ {
391
+ "epoch": 1.7770597738287561,
392
+ "grad_norm": 1.6868358850479126,
393
+ "learning_rate": 7.539380792379569e-07,
394
+ "loss": 0.04847686290740967,
395
+ "step": 5500
396
+ },
397
+ {
398
+ "epoch": 1.8093699515347335,
399
+ "grad_norm": 0.4614435136318207,
400
+ "learning_rate": 5.539389752451485e-07,
401
+ "loss": 0.048504509925842286,
402
+ "step": 5600
403
+ },
404
+ {
405
+ "epoch": 1.8416801292407108,
406
+ "grad_norm": 1.719391107559204,
407
+ "learning_rate": 3.839708220646832e-07,
408
+ "loss": 0.046063599586486814,
409
+ "step": 5700
410
+ },
411
+ {
412
+ "epoch": 1.8739903069466881,
413
+ "grad_norm": 2.2700130939483643,
414
+ "learning_rate": 2.4457398305377857e-07,
415
+ "loss": 0.046228761672973635,
416
+ "step": 5800
417
+ },
418
+ {
419
+ "epoch": 1.9063004846526654,
420
+ "grad_norm": 1.16710364818573,
421
+ "learning_rate": 1.361916291388954e-07,
422
+ "loss": 0.04596792221069336,
423
+ "step": 5900
424
+ },
425
+ {
426
+ "epoch": 1.938610662358643,
427
+ "grad_norm": 0.6629289388656616,
428
+ "learning_rate": 5.916832988514754e-08,
429
+ "loss": 0.04853534698486328,
430
+ "step": 6000
431
+ },
432
+ {
433
+ "epoch": 1.9709208400646203,
434
+ "grad_norm": 0.6959134340286255,
435
+ "learning_rate": 1.3748958039517812e-08,
436
+ "loss": 0.04190609931945801,
437
+ "step": 6100
438
+ }
439
+ ],
440
+ "logging_steps": 100,
441
+ "max_steps": 6190,
442
+ "num_input_tokens_seen": 0,
443
+ "num_train_epochs": 2,
444
+ "save_steps": 500,
445
+ "stateful_callbacks": {
446
+ "TrainerControl": {
447
+ "args": {
448
+ "should_epoch_stop": false,
449
+ "should_evaluate": false,
450
+ "should_log": false,
451
+ "should_save": true,
452
+ "should_training_stop": true
453
+ },
454
+ "attributes": {}
455
+ }
456
+ },
457
+ "total_flos": 0.0,
458
+ "train_batch_size": 16,
459
+ "trial_name": null,
460
+ "trial_params": null
461
+ }
checkpoint-6190/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2181dd01650d66d3bf665d833ec71debd0d4333d3bb123add14eb6c07cff376f
+ size 4856
config.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "_num_labels": 5,
+   "add_cross_attention": false,
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": null,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "dtype": "float32",
+   "eos_token_id": null,
+   "finetuning_task": "sentiment-analysis",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "1 star",
+     "1": "2 stars",
+     "2": "3 stars",
+     "3": "4 stars",
+     "4": "5 stars"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "is_decoder": false,
+   "label2id": {
+     "1 star": 0,
+     "2 stars": 1,
+     "3 stars": 2,
+     "4 stars": 3,
+     "5 stars": 4
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "tie_word_embeddings": true,
+   "transformers_version": "5.1.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 105879
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9725cc1b12322b0174e3cf6fead2ed7ee899deed3b3419ebe404932abf2dc4be
+ size 669448016
model_config.txt ADDED
@@ -0,0 +1,5 @@
+ model_type=regression
+ output_range=0-1
+ base_model=nlptown/bert-base-multilingual-uncased-sentiment
+ class_weighted=True
+ class_weights=0.621,0.827,0.883,1.226,4.312
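The `class_weights` line is consistent with inverse-frequency weighting across the five sentiment bins, where rarer classes receive larger weights. A minimal sketch under that assumption; the per-class counts below are hypothetical, since the actual training distribution is not published in this repo:

```python
import numpy as np

# Hypothetical per-class example counts (NOT from the actual training set);
# chosen only to illustrate the weighting scheme.
counts = np.array([5200, 3900, 3650, 2630, 750], dtype=float)

# Balanced inverse-frequency weighting: weight_k = N / (K * n_k),
# so the rarest class gets the largest weight.
weights = counts.sum() / (len(counts) * counts)
print(np.round(weights, 3))
```

With these made-up counts the formula yields weights of roughly 0.62 to 4.30, increasing as class frequency drops, which matches the shape of the values stored in `model_config.txt`.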
modeling_calibrated.py ADDED
@@ -0,0 +1,184 @@
+ """
+ Custom modeling file for calibrated sentiment prediction.
+ Auto-generated - do not edit manually.
+ """
+ 
+ import json
+ import os
+ 
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ from transformers import AutoModel, PreTrainedModel
+ from transformers.modeling_outputs import SequenceClassifierOutput
+ 
+ 
+ class CalibratedRegressionModel(PreTrainedModel):
+     """
+     Sentiment model with built-in calibration.
+ 
+     Usage:
+         from transformers import AutoTokenizer
+         from modeling_calibrated import CalibratedRegressionModel
+ 
+         model = CalibratedRegressionModel.from_pretrained("your-username/model-name")
+         tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")
+ 
+         # Single prediction
+         result = model.predict_sentiment("This is great!", tokenizer)
+         print(result)  # {'score': 0.85, 'category': 'Very Positive'}
+     """
+ 
+     def __init__(self, config):
+         super().__init__(config)
+ 
+         # Load base transformer
+         self.base_model = AutoModel.from_config(config)
+ 
+         # Regression head
+         self.dropout = nn.Dropout(0.1)
+         self.regressor = nn.Linear(config.hidden_size, 1)
+ 
+         # Load calibration config
+         self.calibrator = None
+         self._load_calibrator()
+ 
+     def _load_calibrator(self):
+         """Load calibration configuration."""
+         calibrator_path = os.path.join(
+             os.path.dirname(__file__),
+             "calibrator_config.json"
+         )
+ 
+         if not os.path.exists(calibrator_path):
+             print("Warning: No calibrator found - using raw predictions")
+             return
+ 
+         try:
+             with open(calibrator_path, 'r') as f:
+                 config = json.load(f)
+ 
+             self.calibrator = config
+             print(f"Loaded {config['method']} calibrator")
+ 
+         except Exception as e:
+             print(f"Warning: Could not load calibrator: {e}")
+             self.calibrator = None
+ 
+     def _calibrate_score(self, score):
+         """Apply calibration to a raw score in [0, 1]."""
+         if self.calibrator is None:
+             return score
+ 
+         method = self.calibrator['method']
+ 
+         if method in ['isotonic', 'quantile_mapping']:
+             # Linear interpolation over the learned input -> output mapping
+             mapping = self.calibrator['mapping']
+             x = np.array(mapping['input_scores'])
+             y = np.array(mapping['output_scores'])
+             calibrated = np.interp(score, x, y)
+ 
+         elif method == 'piecewise':
+             # Interpolate a correction between anchor points, then add it.
+             # Sort (point, correction) pairs together rather than re-stringifying
+             # the float keys, which could miss keys like "0.20" vs "0.2".
+             anchors = self.calibrator['anchors']
+             pairs = sorted((float(k), v) for k, v in anchors.items())
+             anchor_points = [p for p, _ in pairs]
+             anchor_corrections = [c for _, c in pairs]
+ 
+             correction = np.interp(score, anchor_points, anchor_corrections)
+             calibrated = score + correction
+ 
+         else:
+             calibrated = score
+ 
+         return float(np.clip(calibrated, 0.0, 1.0))
+ 
+     def forward(self, input_ids, attention_mask=None, token_type_ids=None, labels=None):
+         """Forward pass with automatic calibration at inference time."""
+ 
+         # Get base model outputs
+         outputs = self.base_model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids
+         )
+ 
+         pooled_output = outputs.pooler_output
+         pooled_output = self.dropout(pooled_output)
+         logits = self.regressor(pooled_output).squeeze(-1)
+ 
+         # Clip to valid range
+         logits = torch.clamp(logits, 0.0, 1.0)
+ 
+         # Apply calibration during inference only (not during training)
+         if not self.training and self.calibrator is not None:
+             # Calibrate each score in the batch
+             scores = logits.detach().cpu().numpy()
+             calibrated_scores = np.array([self._calibrate_score(s) for s in scores])
+             logits = torch.tensor(calibrated_scores, device=logits.device, dtype=logits.dtype)
+ 
+         # Calculate loss if labels provided
+         loss = None
+         if labels is not None:
+             loss_fn = nn.MSELoss()
+             loss = loss_fn(logits, labels)
+ 
+         return SequenceClassifierOutput(
+             loss=loss,
+             logits=logits,
+             hidden_states=outputs.hidden_states if hasattr(outputs, 'hidden_states') else None,
+             attentions=outputs.attentions if hasattr(outputs, 'attentions') else None,
+         )
+ 
+     @staticmethod
+     def score_to_category(score):
+         """Convert a continuous score to its category label."""
+         if score <= 0.20:
+             return "Very Negative"
+         elif score <= 0.40:
+             return "Negative"
+         elif score <= 0.60:
+             return "Neutral"
+         elif score <= 0.80:
+             return "Positive"
+         else:
+             return "Very Positive"
+ 
+     def predict_sentiment(self, text, tokenizer, device=None):
+         """
+         Predict sentiment for a single text (convenience method).
+ 
+         Args:
+             text: Input text string
+             tokenizer: Loaded tokenizer
+             device: Device to use (auto-detected if None)
+ 
+         Returns:
+             dict: {'score': float, 'category': str}
+         """
+         if device is None:
+             device = "cuda" if torch.cuda.is_available() else "cpu"
+ 
+         self.eval()
+         self.to(device)
+ 
+         # Tokenize
+         inputs = tokenizer(
+             text,
+             return_tensors="pt",
+             padding=True,
+             truncation=True,
+             max_length=512
+         )
+         inputs = {k: v.to(device) for k, v in inputs.items()}
+ 
+         # Predict
+         with torch.no_grad():
+             outputs = self(**inputs)
+             score = outputs.logits.item()
+ 
+         return {
+             'score': score,
+             'category': self.score_to_category(score)
+         }
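The isotonic branch of `_calibrate_score` in the file above reduces to a table lookup with linear interpolation via `np.interp`. A minimal standalone sketch; the mapping values here are made up for illustration (the real ones ship in `calibrator_config.json`):

```python
import numpy as np

# Hypothetical isotonic mapping: monotonic raw-score -> calibrated-score table.
input_scores = [0.0, 0.3, 0.5, 0.7, 1.0]
output_scores = [0.0, 0.2, 0.5, 0.8, 1.0]

def calibrate(raw):
    # Same operation as the isotonic branch of _calibrate_score:
    # interpolate between table entries, then clip to [0, 1].
    return float(np.clip(np.interp(raw, input_scores, output_scores), 0.0, 1.0))

# 0.6 lies halfway between nodes 0.5 and 0.7, so the output lies
# halfway between 0.5 and 0.8:
print(calibrate(0.6))  # 0.65
```

Because the learned mapping is monotonic, calibration reorders nothing: it only stretches or compresses regions of the score range where the raw model was systematically too high or too low.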
regressor_head.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bec00dafb98407f2c5ee552c3243d065db8bf618c19b782aee35c555d3fa7abf
+ size 4610
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "backend": "tokenizers",
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "is_local": false,
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9fb8678aa4cc1bafcf0fc80034214048963228af3f0c28998f22b805d360a6a
+ size 4856