WolfPTL committed (verified)
Commit 11235c3 · 1 parent: d981608

Initial upload - RoBERTa fine-tuned on r/Singapore
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ checkpoint-3095/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-6190/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,88 @@
+ ---
+ language: en
+ tags:
+ - sentiment-analysis
+ - singapore
+ - singlish
+ - regression
+ license: mit
+ ---
+
+ # Singapore Sentiment Analyzer - RoBERTa (Calibrated)
+
+ Fine-tuned sentiment analysis model for Singapore social media, **with post-training calibration** for improved accuracy.
+
+ ## 🎯 Performance
+
+ | Metric | Before Calibration | After Calibration | Improvement |
+ |--------|-------------------|-------------------|-------------|
+ | **Accuracy** | 52.6% | **64.0%** | **+11.4 pp** |
+ | **MAE** | 0.126 | **0.104** | **-0.022** |
+ | **RMSE** | 0.168 | **0.141** | **-0.027** |
+
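The MAE and RMSE figures above are standard regression metrics over the 0-1 sentiment scores. A minimal sketch of how they are computed (the function name is illustrative, not part of this repo):

```python
import numpy as np

def regression_metrics(preds, targets):
    # Mean absolute error and root mean squared error over 0-1 scores.
    err = np.asarray(preds, dtype=float) - np.asarray(targets, dtype=float)
    return {
        "mae": float(np.mean(np.abs(err))),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
    }
```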
+ ## 📊 Sentiment Scale
+
+ | Score | Category |
+ |-------|----------|
+ | 0.00 - 0.20 | Very Negative |
+ | 0.21 - 0.40 | Negative |
+ | 0.41 - 0.60 | Neutral |
+ | 0.61 - 0.80 | Positive |
+ | 0.81 - 1.00 | Very Positive |
+
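The bucketing in the scale table maps directly to code; a small helper (the function name is ours, not part of the repo's API):

```python
def score_to_category(score: float) -> str:
    # Thresholds follow the sentiment scale table above.
    if score <= 0.20:
        return "Very Negative"
    if score <= 0.40:
        return "Negative"
    if score <= 0.60:
        return "Neutral"
    if score <= 0.80:
        return "Positive"
    return "Very Positive"
```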
+ ## 🚀 Quick Start
+
+ ```python
+ from transformers import AutoTokenizer
+ from modeling_calibrated import CalibratedRegressionModel
+
+ # Load model (calibration is applied automatically)
+ model_name = "your-username/roberta-singapore-sentiment"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = CalibratedRegressionModel.from_pretrained(model_name)
+
+ # Predict sentiment
+ text = "This chicken rice is damn shiok sia!"
+ result = model.predict_sentiment(text, tokenizer)
+
+ print(f"Score: {result['score']:.3f}")      # 0.875
+ print(f"Category: {result['category']}")    # "Very Positive"
+ ```
+
+ ## 💡 What is Calibration?
+
+ After fine-tuning, we applied **isotonic regression calibration** on a validation set. This corrects systematic bias patterns where the model was:
+ - Over-predicting on negative examples
+ - Under-predicting on positive examples
+ - Struggling with boundary cases (e.g., neutral vs. negative)
+
+ The calibration layer is built into the model, so you get calibrated predictions automatically.
+
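Concretely, the fitted isotonic mapping is stored in `calibrator_config.json` as paired `input_scores`/`output_scores` knots, and applying it amounts to piecewise-linear interpolation between those knots. A minimal sketch, assuming that file layout (`apply_calibration` is an illustrative name, not the repo's API):

```python
import json
import numpy as np

def load_knots(path="calibrator_config.json"):
    # Read the saved isotonic mapping: raw model scores -> calibrated scores.
    with open(path) as f:
        mapping = json.load(f)["mapping"]
    return np.array(mapping["input_scores"]), np.array(mapping["output_scores"])

def apply_calibration(raw_score, inputs, outputs):
    # Piecewise-linear interpolation between knots; scores outside the
    # knot range are clamped to the boundary calibrated values.
    return float(np.interp(raw_score, inputs, outputs))
```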
+ ## 📚 Training Details
+
+ - **Base model**: `cardiffnlp/twitter-roberta-base-sentiment-latest`
+ - **Training data**: 49,521 Singapore Reddit posts/comments
+ - **Fine-tuning**: 5 epochs, MSE loss, learning rate 2e-5
+ - **Calibration**: Isotonic regression on a 500-sample validation set
+
+ ## 🌏 Singapore Context
+
+ This model understands Singlish patterns and Singapore-specific terminology:
+ - Particles: lah, lor, leh, sia
+ - Slang: shiok, sian, jialat, paiseh
+ - Local context: HDB, MRT, hawker, kopitiam
+
+ ## 📝 Citation
+
+ ```bibtex
+ @misc{roberta-singapore-calibrated,
+   title = {Singapore Sentiment Analyzer - RoBERTa (Calibrated)},
+   year = {2026},
+   publisher = {HuggingFace},
+   url = {https://huggingface.co/your-username/roberta-singapore-sentiment}
+ }
+ ```
+
+ ## 📄 License
+
+ MIT License - free for commercial and non-commercial use.
calibrator_config.json ADDED
@@ -0,0 +1,212 @@
+ {
+   "method": "isotonic",
+   "version": "1.0",
+   "mapping": {
+     "input_scores": [
+       0.0,
+       0.01,
+       0.02,
+       0.03,
+       0.04,
+       0.05,
+       0.06,
+       0.07,
+       0.08,
+       0.09,
+       0.1,
+       0.11,
+       0.12,
+       0.13,
+       0.14,
+       0.15,
+       0.16,
+       0.17,
+       0.18,
+       0.19,
+       0.2,
+       0.21,
+       0.22,
+       0.23,
+       0.24,
+       0.25,
+       0.26,
+       0.27,
+       0.28,
+       0.29,
+       0.3,
+       0.31,
+       0.32,
+       0.33,
+       0.34,
+       0.35000000000000003,
+       0.36,
+       0.37,
+       0.38,
+       0.39,
+       0.4,
+       0.41000000000000003,
+       0.42,
+       0.43,
+       0.44,
+       0.45,
+       0.46,
+       0.47000000000000003,
+       0.48,
+       0.49,
+       0.5,
+       0.51,
+       0.52,
+       0.53,
+       0.54,
+       0.55,
+       0.56,
+       0.5700000000000001,
+       0.58,
+       0.59,
+       0.6,
+       0.61,
+       0.62,
+       0.63,
+       0.64,
+       0.65,
+       0.66,
+       0.67,
+       0.68,
+       0.6900000000000001,
+       0.7000000000000001,
+       0.71,
+       0.72,
+       0.73,
+       0.74,
+       0.75,
+       0.76,
+       0.77,
+       0.78,
+       0.79,
+       0.8,
+       0.81,
+       0.8200000000000001,
+       0.8300000000000001,
+       0.84,
+       0.85,
+       0.86,
+       0.87,
+       0.88,
+       0.89,
+       0.9,
+       0.91,
+       0.92,
+       0.93,
+       0.9400000000000001,
+       0.9500000000000001,
+       0.96,
+       0.97,
+       0.98,
+       0.99,
+       1.0
+     ],
+     "output_scores": [
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.27,
+       0.2992307692307692,
+       0.2992307692307692,
+       0.2992307692307692,
+       0.3272,
+       0.3272,
+       0.3272,
+       0.34397337082535834,
+       0.3448571428571429,
+       0.3448571428571429,
+       0.37571428571428567,
+       0.37571428571428567,
+       0.40338709677419354,
+       0.40338709677419354,
+       0.40338709677419354,
+       0.40713987827597964,
+       0.43027027027027026,
+       0.43256671918424794,
+       0.44,
+       0.44,
+       0.44,
+       0.44,
+       0.44,
+       0.45,
+       0.451875,
+       0.451875,
+       0.451875,
+       0.47468750000000004,
+       0.47468750000000004,
+       0.47468750000000004,
+       0.47468750000000004,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5066279069767443,
+       0.5311764705882354,
+       0.5311764705882354,
+       0.5311764705882354,
+       0.5311764705882354,
+       0.5311764705882354,
+       0.5311764705882354,
+       0.5311764705882354,
+       0.5327950005287025,
+       0.5691666666666667,
+       0.5691666666666667,
+       0.5691666666666667,
+       0.5691666666666667,
+       0.5691666666666667,
+       0.5691666666666667,
+       0.578,
+       0.6255555555555554,
+       0.6255555555555554,
+       0.6255555555555554,
+       0.6255555555555554,
+       0.6255555555555554,
+       0.6255555555555554,
+       0.7080074947341977,
+       1.0,
+       1.0,
+       1.0,
+       1.0,
+       1.0,
+       1.0,
+       1.0,
+       1.0,
+       1.0
+     ]
+   }
+ }
checkpoint-3095/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2538f6728e4ce3b6913874b96bea3cbf01b35b69f36e2a0ff8afee61a43a6d5a
+ size 1112201908
checkpoint-3095/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f679ee56b862ba9bd1a014dfc3a6c596475bbc4cf4843d4fd0e3761274bed760
+ size 2224523514
checkpoint-3095/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec1d5b7b9bf8eb4b2f98128c10f9a0ee63e3904bb1961798817eb980b35e8d1e
+ size 14244
checkpoint-3095/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adc90cbad3fd05852b7e150f6fcb9bb1e3a5e38c2220ea5f0e80632e44f0a3c6
+ size 988
checkpoint-3095/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:973efcdf28ee46e111fafd3e448424d1ca0aa9d4a70bfd873711cc91b30c6295
+ size 1064
checkpoint-3095/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaea213cb90c14e73a4c3a9d7d3a4080cbaf4cd4ff1d82152a5d17abdf21f483
+ size 16781751
checkpoint-3095/tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "add_prefix_space": true,
+   "backend": "tokenizers",
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "is_local": false,
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }
checkpoint-3095/trainer_state.json ADDED
@@ -0,0 +1,244 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 1.0,
6
+ "eval_steps": 500,
7
+ "global_step": 3095,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.03231017770597738,
14
+ "grad_norm": 9.861390113830566,
15
+ "learning_rate": 3.101777059773829e-06,
16
+ "loss": 0.5477748489379883,
17
+ "step": 100
18
+ },
19
+ {
20
+ "epoch": 0.06462035541195477,
21
+ "grad_norm": 2.616037368774414,
22
+ "learning_rate": 6.332794830371568e-06,
23
+ "loss": 0.12676631927490234,
24
+ "step": 200
25
+ },
26
+ {
27
+ "epoch": 0.09693053311793215,
28
+ "grad_norm": 4.2857279777526855,
29
+ "learning_rate": 9.563812600969306e-06,
30
+ "loss": 0.09911694526672363,
31
+ "step": 300
32
+ },
33
+ {
34
+ "epoch": 0.12924071082390953,
35
+ "grad_norm": 1.2939870357513428,
36
+ "learning_rate": 1.2794830371567044e-05,
37
+ "loss": 0.1049954891204834,
38
+ "step": 400
39
+ },
40
+ {
41
+ "epoch": 0.16155088852988692,
42
+ "grad_norm": 5.219731330871582,
43
+ "learning_rate": 1.6025848142164783e-05,
44
+ "loss": 0.08385737419128418,
45
+ "step": 500
46
+ },
47
+ {
48
+ "epoch": 0.1938610662358643,
49
+ "grad_norm": 2.850756883621216,
50
+ "learning_rate": 1.925686591276252e-05,
51
+ "loss": 0.09317334175109863,
52
+ "step": 600
53
+ },
54
+ {
55
+ "epoch": 0.22617124394184168,
56
+ "grad_norm": 1.8276008367538452,
57
+ "learning_rate": 1.9990574234185796e-05,
58
+ "loss": 0.07786022186279297,
59
+ "step": 700
60
+ },
61
+ {
62
+ "epoch": 0.25848142164781907,
63
+ "grad_norm": 2.793462038040161,
64
+ "learning_rate": 1.995022750965336e-05,
65
+ "loss": 0.08245490074157714,
66
+ "step": 800
67
+ },
68
+ {
69
+ "epoch": 0.29079159935379645,
70
+ "grad_norm": 6.711742877960205,
71
+ "learning_rate": 1.9878246986426318e-05,
72
+ "loss": 0.08018719673156738,
73
+ "step": 900
74
+ },
75
+ {
76
+ "epoch": 0.32310177705977383,
77
+ "grad_norm": 3.199460983276367,
78
+ "learning_rate": 1.9774861505240173e-05,
79
+ "loss": 0.0739774227142334,
80
+ "step": 1000
81
+ },
82
+ {
83
+ "epoch": 0.3554119547657512,
84
+ "grad_norm": 2.4899613857269287,
85
+ "learning_rate": 1.964039974958449e-05,
86
+ "loss": 0.08382820129394532,
87
+ "step": 1100
88
+ },
89
+ {
90
+ "epoch": 0.3877221324717286,
91
+ "grad_norm": 4.4406304359436035,
92
+ "learning_rate": 1.9475289200751162e-05,
93
+ "loss": 0.07487330436706544,
94
+ "step": 1200
95
+ },
96
+ {
97
+ "epoch": 0.420032310177706,
98
+ "grad_norm": 1.0857970714569092,
99
+ "learning_rate": 1.928005477878439e-05,
100
+ "loss": 0.07685728549957276,
101
+ "step": 1300
102
+ },
103
+ {
104
+ "epoch": 0.45234248788368336,
105
+ "grad_norm": 1.1819262504577637,
106
+ "learning_rate": 1.9055317173653e-05,
107
+ "loss": 0.07501106262207032,
108
+ "step": 1400
109
+ },
110
+ {
111
+ "epoch": 0.48465266558966075,
112
+ "grad_norm": 3.129441976547241,
113
+ "learning_rate": 1.880179087195068e-05,
114
+ "loss": 0.07474668979644776,
115
+ "step": 1500
116
+ },
117
+ {
118
+ "epoch": 0.5169628432956381,
119
+ "grad_norm": 2.9378674030303955,
120
+ "learning_rate": 1.8520281885397672e-05,
121
+ "loss": 0.07247552871704102,
122
+ "step": 1600
123
+ },
124
+ {
125
+ "epoch": 0.5492730210016155,
126
+ "grad_norm": 1.560481309890747,
127
+ "learning_rate": 1.821168518836544e-05,
128
+ "loss": 0.07358447074890137,
129
+ "step": 1700
130
+ },
131
+ {
132
+ "epoch": 0.5815831987075929,
133
+ "grad_norm": 4.726365089416504,
134
+ "learning_rate": 1.787698187257095e-05,
135
+ "loss": 0.07952657222747803,
136
+ "step": 1800
137
+ },
138
+ {
139
+ "epoch": 0.6138933764135702,
140
+ "grad_norm": 1.4165911674499512,
141
+ "learning_rate": 1.7517236027986427e-05,
142
+ "loss": 0.07016692161560059,
143
+ "step": 1900
144
+ },
145
+ {
146
+ "epoch": 0.6462035541195477,
147
+ "grad_norm": 0.4783337712287903,
148
+ "learning_rate": 1.7133591359880684e-05,
149
+ "loss": 0.07494973182678223,
150
+ "step": 2000
151
+ },
152
+ {
153
+ "epoch": 0.678513731825525,
154
+ "grad_norm": 2.530547618865967,
155
+ "learning_rate": 1.6727267552747313e-05,
156
+ "loss": 0.07734737873077392,
157
+ "step": 2100
158
+ },
159
+ {
160
+ "epoch": 0.7108239095315024,
161
+ "grad_norm": 1.0889544486999512,
162
+ "learning_rate": 1.6299556392679357e-05,
163
+ "loss": 0.0775426435470581,
164
+ "step": 2200
165
+ },
166
+ {
167
+ "epoch": 0.7431340872374798,
168
+ "grad_norm": 2.1081032752990723,
169
+ "learning_rate": 1.5851817660518402e-05,
170
+ "loss": 0.06733710289001466,
171
+ "step": 2300
172
+ },
173
+ {
174
+ "epoch": 0.7754442649434572,
175
+ "grad_norm": 1.5836951732635498,
176
+ "learning_rate": 1.5385474808834478e-05,
177
+ "loss": 0.08194868087768555,
178
+ "step": 2400
179
+ },
180
+ {
181
+ "epoch": 0.8077544426494345,
182
+ "grad_norm": 2.057037353515625,
183
+ "learning_rate": 1.4902010436480573e-05,
184
+ "loss": 0.06820503234863282,
185
+ "step": 2500
186
+ },
187
+ {
188
+ "epoch": 0.840064620355412,
189
+ "grad_norm": 0.6506631374359131,
190
+ "learning_rate": 1.4402961575109102e-05,
191
+ "loss": 0.07051002025604249,
192
+ "step": 2600
193
+ },
194
+ {
195
+ "epoch": 0.8723747980613893,
196
+ "grad_norm": 4.292245388031006,
197
+ "learning_rate": 1.388991480263541e-05,
198
+ "loss": 0.07151034355163574,
199
+ "step": 2700
200
+ },
201
+ {
202
+ "epoch": 0.9046849757673667,
203
+ "grad_norm": 1.9793055057525635,
204
+ "learning_rate": 1.336450119918359e-05,
205
+ "loss": 0.0655900478363037,
206
+ "step": 2800
207
+ },
208
+ {
209
+ "epoch": 0.9369951534733441,
210
+ "grad_norm": 2.530876398086548,
211
+ "learning_rate": 1.2828391161550802e-05,
212
+ "loss": 0.07584657192230225,
213
+ "step": 2900
214
+ },
215
+ {
216
+ "epoch": 0.9693053311793215,
217
+ "grad_norm": 2.5297999382019043,
218
+ "learning_rate": 1.2283289092675784e-05,
219
+ "loss": 0.07008735656738281,
220
+ "step": 3000
221
+ }
222
+ ],
223
+ "logging_steps": 100,
224
+ "max_steps": 6190,
225
+ "num_input_tokens_seen": 0,
226
+ "num_train_epochs": 2,
227
+ "save_steps": 500,
228
+ "stateful_callbacks": {
229
+ "TrainerControl": {
230
+ "args": {
231
+ "should_epoch_stop": false,
232
+ "should_evaluate": false,
233
+ "should_log": false,
234
+ "should_save": true,
235
+ "should_training_stop": false
236
+ },
237
+ "attributes": {}
238
+ }
239
+ },
240
+ "total_flos": 0.0,
241
+ "train_batch_size": 16,
242
+ "trial_name": null,
243
+ "trial_params": null
244
+ }
checkpoint-3095/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85ca94fddce2114b27b9b9bdfc249806dcd2ebc421af3e27a32222368e1077c7
+ size 4792
checkpoint-6190/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87cd730ae4c38a947868a6481864ac60e3b38580246a584547a9baff198bfc90
+ size 1112201908
checkpoint-6190/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7bce45e5c04298ae7ddd5934d82f3d57e3edf7b338e1d5bf76d2c5faa6cb66f
+ size 2224523514
checkpoint-6190/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28ce47c620300bd9f2ffa8d5f2e1bae45452602cfdd4d5fc78dfcc639890e3ae
+ size 14244
checkpoint-6190/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e28f05c7fe964d51cc0c14d3b235eda98be733ad1777b7507ff657cfadf5b07c
+ size 988
checkpoint-6190/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e5af96d214ce66ef00dfede4f395ac95de66fe120d30f632a424a2f80db894a
+ size 1064
checkpoint-6190/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaea213cb90c14e73a4c3a9d7d3a4080cbaf4cd4ff1d82152a5d17abdf21f483
+ size 16781751
checkpoint-6190/tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "add_prefix_space": true,
+   "backend": "tokenizers",
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "is_local": false,
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }
checkpoint-6190/trainer_state.json ADDED
@@ -0,0 +1,461 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 2.0,
6
+ "eval_steps": 500,
7
+ "global_step": 6190,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.03231017770597738,
14
+ "grad_norm": 9.861390113830566,
15
+ "learning_rate": 3.101777059773829e-06,
16
+ "loss": 0.5477748489379883,
17
+ "step": 100
18
+ },
19
+ {
20
+ "epoch": 0.06462035541195477,
21
+ "grad_norm": 2.616037368774414,
22
+ "learning_rate": 6.332794830371568e-06,
23
+ "loss": 0.12676631927490234,
24
+ "step": 200
25
+ },
26
+ {
27
+ "epoch": 0.09693053311793215,
28
+ "grad_norm": 4.2857279777526855,
29
+ "learning_rate": 9.563812600969306e-06,
30
+ "loss": 0.09911694526672363,
31
+ "step": 300
32
+ },
33
+ {
34
+ "epoch": 0.12924071082390953,
35
+ "grad_norm": 1.2939870357513428,
36
+ "learning_rate": 1.2794830371567044e-05,
37
+ "loss": 0.1049954891204834,
38
+ "step": 400
39
+ },
40
+ {
41
+ "epoch": 0.16155088852988692,
42
+ "grad_norm": 5.219731330871582,
43
+ "learning_rate": 1.6025848142164783e-05,
44
+ "loss": 0.08385737419128418,
45
+ "step": 500
46
+ },
47
+ {
48
+ "epoch": 0.1938610662358643,
49
+ "grad_norm": 2.850756883621216,
50
+ "learning_rate": 1.925686591276252e-05,
51
+ "loss": 0.09317334175109863,
52
+ "step": 600
53
+ },
54
+ {
55
+ "epoch": 0.22617124394184168,
56
+ "grad_norm": 1.8276008367538452,
57
+ "learning_rate": 1.9990574234185796e-05,
58
+ "loss": 0.07786022186279297,
59
+ "step": 700
60
+ },
61
+ {
62
+ "epoch": 0.25848142164781907,
63
+ "grad_norm": 2.793462038040161,
64
+ "learning_rate": 1.995022750965336e-05,
65
+ "loss": 0.08245490074157714,
66
+ "step": 800
67
+ },
68
+ {
69
+ "epoch": 0.29079159935379645,
70
+ "grad_norm": 6.711742877960205,
71
+ "learning_rate": 1.9878246986426318e-05,
72
+ "loss": 0.08018719673156738,
73
+ "step": 900
74
+ },
75
+ {
76
+ "epoch": 0.32310177705977383,
77
+ "grad_norm": 3.199460983276367,
78
+ "learning_rate": 1.9774861505240173e-05,
79
+ "loss": 0.0739774227142334,
80
+ "step": 1000
81
+ },
82
+ {
83
+ "epoch": 0.3554119547657512,
84
+ "grad_norm": 2.4899613857269287,
85
+ "learning_rate": 1.964039974958449e-05,
86
+ "loss": 0.08382820129394532,
87
+ "step": 1100
88
+ },
89
+ {
90
+ "epoch": 0.3877221324717286,
91
+ "grad_norm": 4.4406304359436035,
92
+ "learning_rate": 1.9475289200751162e-05,
93
+ "loss": 0.07487330436706544,
94
+ "step": 1200
95
+ },
96
+ {
97
+ "epoch": 0.420032310177706,
98
+ "grad_norm": 1.0857970714569092,
99
+ "learning_rate": 1.928005477878439e-05,
100
+ "loss": 0.07685728549957276,
101
+ "step": 1300
102
+ },
103
+ {
104
+ "epoch": 0.45234248788368336,
105
+ "grad_norm": 1.1819262504577637,
106
+ "learning_rate": 1.9055317173653e-05,
107
+ "loss": 0.07501106262207032,
108
+ "step": 1400
109
+ },
110
+ {
111
+ "epoch": 0.48465266558966075,
112
+ "grad_norm": 3.129441976547241,
113
+ "learning_rate": 1.880179087195068e-05,
114
+ "loss": 0.07474668979644776,
115
+ "step": 1500
116
+ },
117
+ {
118
+ "epoch": 0.5169628432956381,
119
+ "grad_norm": 2.9378674030303955,
120
+ "learning_rate": 1.8520281885397672e-05,
121
+ "loss": 0.07247552871704102,
122
+ "step": 1600
123
+ },
124
+ {
125
+ "epoch": 0.5492730210016155,
126
+ "grad_norm": 1.560481309890747,
127
+ "learning_rate": 1.821168518836544e-05,
128
+ "loss": 0.07358447074890137,
129
+ "step": 1700
130
+ },
131
+ {
132
+ "epoch": 0.5815831987075929,
133
+ "grad_norm": 4.726365089416504,
134
+ "learning_rate": 1.787698187257095e-05,
135
+ "loss": 0.07952657222747803,
136
+ "step": 1800
137
+ },
138
+ {
139
+ "epoch": 0.6138933764135702,
140
+ "grad_norm": 1.4165911674499512,
141
+ "learning_rate": 1.7517236027986427e-05,
142
+ "loss": 0.07016692161560059,
143
+ "step": 1900
144
+ },
145
+ {
146
+ "epoch": 0.6462035541195477,
147
+ "grad_norm": 0.4783337712287903,
148
+ "learning_rate": 1.7133591359880684e-05,
149
+ "loss": 0.07494973182678223,
150
+ "step": 2000
151
+ },
152
+ {
153
+ "epoch": 0.678513731825525,
154
+ "grad_norm": 2.530547618865967,
155
+ "learning_rate": 1.6727267552747313e-05,
156
+ "loss": 0.07734737873077392,
157
+ "step": 2100
158
+ },
159
+ {
160
+ "epoch": 0.7108239095315024,
161
+ "grad_norm": 1.0889544486999512,
162
+ "learning_rate": 1.6299556392679357e-05,
163
+ "loss": 0.0775426435470581,
164
+ "step": 2200
165
+ },
166
+ {
167
+ "epoch": 0.7431340872374798,
168
+ "grad_norm": 2.1081032752990723,
169
+ "learning_rate": 1.5851817660518402e-05,
170
+ "loss": 0.06733710289001466,
171
+ "step": 2300
172
+ },
173
+ {
174
+ "epoch": 0.7754442649434572,
175
+ "grad_norm": 1.5836951732635498,
176
+ "learning_rate": 1.5385474808834478e-05,
177
+ "loss": 0.08194868087768555,
178
+ "step": 2400
179
+ },
180
+ {
181
+ "epoch": 0.8077544426494345,
182
+ "grad_norm": 2.057037353515625,
183
+ "learning_rate": 1.4902010436480573e-05,
184
+ "loss": 0.06820503234863282,
185
+ "step": 2500
186
+ },
187
+ {
188
+ "epoch": 0.840064620355412,
189
+ "grad_norm": 0.6506631374359131,
190
+ "learning_rate": 1.4402961575109102e-05,
191
+ "loss": 0.07051002025604249,
192
+ "step": 2600
193
+ },
194
+ {
195
+ "epoch": 0.8723747980613893,
196
+ "grad_norm": 4.292245388031006,
197
+ "learning_rate": 1.388991480263541e-05,
198
+ "loss": 0.07151034355163574,
199
+ "step": 2700
200
+ },
201
+ {
202
+ "epoch": 0.9046849757673667,
203
+ "grad_norm": 1.9793055057525635,
204
+ "learning_rate": 1.336450119918359e-05,
205
+ "loss": 0.0655900478363037,
206
+ "step": 2800
207
+ },
208
+ {
209
+ "epoch": 0.9369951534733441,
210
+ "grad_norm": 2.530876398086548,
211
+ "learning_rate": 1.2828391161550802e-05,
212
+ "loss": 0.07584657192230225,
213
+ "step": 2900
214
+ },
215
+ {
216
+ "epoch": 0.9693053311793215,
217
+ "grad_norm": 2.5297999382019043,
218
+ "learning_rate": 1.2283289092675784e-05,
219
+ "loss": 0.07008735656738281,
220
+ "step": 3000
221
+ },
222
+ {
223
+ "epoch": 1.001615508885299,
224
+ "grad_norm": 1.7796375751495361,
225
+ "learning_rate": 1.1730927982994993e-05,
226
+ "loss": 0.06636083602905274,
227
+ "step": 3100
228
+ },
229
+ {
230
+ "epoch": 1.0339256865912763,
231
+ "grad_norm": 1.7367686033248901,
232
+ "learning_rate": 1.1173063900913238e-05,
233
+ "loss": 0.0592303991317749,
234
+ "step": 3200
235
+ },
236
+ {
237
+ "epoch": 1.0662358642972536,
238
+ "grad_norm": 4.276639461517334,
239
+ "learning_rate": 1.0611470409904767e-05,
240
+ "loss": 0.06370447158813476,
241
+ "step": 3300
242
+ },
243
+ {
244
+ "epoch": 1.098546042003231,
245
+ "grad_norm": 0.847137451171875,
246
+ "learning_rate": 1.004793292999394e-05,
247
+ "loss": 0.05840009212493896,
248
+ "step": 3400
249
+ },
250
+ {
251
+ "epoch": 1.1308562197092085,
252
+ "grad_norm": 0.6478699445724487,
253
+ "learning_rate": 9.484243061541569e-06,
254
+ "loss": 0.06211245059967041,
255
+ "step": 3500
256
+ },
257
+ {
258
+ "epoch": 1.1631663974151858,
259
+ "grad_norm": 1.4824755191802979,
260
+ "learning_rate": 8.922192889382645e-06,
261
+ "loss": 0.055956668853759765,
262
+ "step": 3600
263
+ },
264
+ {
265
+ "epoch": 1.1954765751211631,
266
+ "grad_norm": 2.773651599884033,
267
+ "learning_rate": 8.363569285423877e-06,
268
+ "loss": 0.06431435108184815,
269
+ "step": 3700
270
+ },
271
+ {
272
+ "epoch": 1.2277867528271407,
273
+ "grad_norm": 2.68066668510437,
274
+ "learning_rate": 7.810148227814154e-06,
275
+ "loss": 0.05840532779693604,
276
+ "step": 3800
277
+ },
278
+ {
279
+ "epoch": 1.260096930533118,
280
+ "grad_norm": 1.5976542234420776,
281
+ "learning_rate": 7.2691135581136115e-06,
282
+ "loss": 0.05926973342895508,
283
+ "step": 3900
284
+ },
285
+ {
286
+ "epoch": 1.2924071082390953,
287
+ "grad_norm": 1.544439673423767,
288
+ "learning_rate": 6.731258267944037e-06,
289
+ "loss": 0.05274605751037598,
290
+ "step": 4000
291
+ },
292
+ {
293
+ "epoch": 1.3247172859450727,
294
+ "grad_norm": 2.24636173248291,
295
+ "learning_rate": 6.203794973116895e-06,
296
+ "loss": 0.059646468162536624,
297
+ "step": 4100
298
+ },
299
+ {
300
+ "epoch": 1.35702746365105,
301
+ "grad_norm": 3.2433087825775146,
302
+ "learning_rate": 5.688400586815448e-06,
303
+ "loss": 0.06289317131042481,
304
+ "step": 4200
305
+ },
306
+ {
307
+ "epoch": 1.3893376413570275,
308
+ "grad_norm": 1.3600902557373047,
309
+ "learning_rate": 5.186713652706039e-06,
310
+ "loss": 0.052667765617370604,
311
+ "step": 4300
312
+ },
313
+ {
314
+ "epoch": 1.4216478190630049,
315
+ "grad_norm": 1.0024421215057373,
316
+ "learning_rate": 4.700329135674232e-06,
317
+ "loss": 0.06143908977508545,
318
+ "step": 4400
319
+ },
320
+ {
321
+ "epoch": 1.4539579967689822,
322
+ "grad_norm": 1.5457347631454468,
323
+ "learning_rate": 4.230793351106791e-06,
324
+ "loss": 0.05797869205474854,
325
+ "step": 4500
326
+ },
327
+ {
328
+ "epoch": 1.4862681744749597,
329
+ "grad_norm": 4.037192344665527,
330
+ "learning_rate": 3.779599048840291e-06,
331
+ "loss": 0.06049912929534912,
332
+ "step": 4600
333
+ },
334
+ {
335
+ "epoch": 1.518578352180937,
336
+ "grad_norm": 1.502553939819336,
337
+ "learning_rate": 3.348180667405534e-06,
338
+ "loss": 0.060120220184326174,
339
+ "step": 4700
340
+ },
341
+ {
342
+ "epoch": 1.5508885298869144,
343
+ "grad_norm": 1.232434868812561,
344
+ "learning_rate": 2.937909773655454e-06,
345
+ "loss": 0.05791036605834961,
346
+ "step": 4800
347
+ },
348
+ {
349
+ "epoch": 1.5831987075928917,
350
+ "grad_norm": 1.6803334951400757,
351
+ "learning_rate": 2.5500907022749176e-06,
352
+ "loss": 0.06459022045135498,
353
+ "step": 4900
354
+ },
355
+ {
356
+ "epoch": 1.615508885298869,
357
+ "grad_norm": 2.0008294582366943,
358
+ "learning_rate": 2.185956409035248e-06,
359
+ "loss": 0.05583086967468262,
360
+ "step": 5000
361
+ },
362
+ {
363
+ "epoch": 1.6478190630048464,
364
+ "grad_norm": 1.3095674514770508,
365
+ "learning_rate": 1.8466645509768467e-06,
366
+ "loss": 0.054550580978393555,
367
+ "step": 5100
368
+ },
369
+ {
370
+ "epoch": 1.680129240710824,
371
+ "grad_norm": 1.9832102060317993,
372
+ "learning_rate": 1.5332938059818138e-06,
373
+ "loss": 0.059768757820129394,
374
+ "step": 5200
375
+ },
376
+ {
377
+ "epoch": 1.7124394184168013,
378
+ "grad_norm": 1.8038883209228516,
379
+ "learning_rate": 1.2468404434373893e-06,
380
+ "loss": 0.0636633014678955,
381
+ "step": 5300
382
+ },
383
+ {
384
+ "epoch": 1.7447495961227788,
385
+ "grad_norm": 0.849116325378418,
386
+ "learning_rate": 9.882151568927734e-07,
387
+ "loss": 0.0593804407119751,
388
+ "step": 5400
389
+ },
390
+ {
391
+ "epoch": 1.7770597738287561,
392
+ "grad_norm": 7.403454780578613,
393
+ "learning_rate": 7.582401687789576e-07,
394
+ "loss": 0.055037131309509275,
395
+ "step": 5500
396
+ },
397
+ {
398
+ "epoch": 1.8093699515347335,
399
+ "grad_norm": 1.4914259910583496,
400
+ "learning_rate": 5.576466163962424e-07,
401
+ "loss": 0.0574748420715332,
402
+ "step": 5600
403
+ },
404
+ {
405
+ "epoch": 1.8416801292407108,
406
+ "grad_norm": 4.520964622497559,
407
+ "learning_rate": 3.870722274799332e-07,
408
+ "loss": 0.05434022903442383,
409
+ "step": 5700
410
+ },
411
+ {
412
+ "epoch": 1.8739903069466881,
413
+ "grad_norm": 4.723050117492676,
414
+ "learning_rate": 2.470592927340576e-07,
415
+ "loss": 0.0561255693435669,
416
+ "step": 5800
417
+ },
418
+ {
419
+ "epoch": 1.9063004846526654,
420
+ "grad_norm": 1.1265549659729004,
421
+ "learning_rate": 1.3805294177882567e-07,
422
+ "loss": 0.053927097320556644,
423
+ "step": 5900
424
+ },
425
+ {
426
+ "epoch": 1.938610662358643,
427
+ "grad_norm": 0.8897490501403809,
428
+ "learning_rate": 6.039972799296246e-08,
429
+ "loss": 0.05735964298248291,
430
+ "step": 6000
431
+ },
432
+ {
433
+ "epoch": 1.9709208400646203,
434
+ "grad_norm": 2.7680275440216064,
435
+ "learning_rate": 1.4346526749972056e-08,
436
+ "loss": 0.05199080467224121,
437
+ "step": 6100
438
+ }
439
+ ],
440
+ "logging_steps": 100,
441
+ "max_steps": 6190,
442
+ "num_input_tokens_seen": 0,
443
+ "num_train_epochs": 2,
444
+ "save_steps": 500,
445
+ "stateful_callbacks": {
446
+ "TrainerControl": {
447
+ "args": {
448
+ "should_epoch_stop": false,
449
+ "should_evaluate": false,
450
+ "should_log": false,
451
+ "should_save": true,
452
+ "should_training_stop": true
453
+ },
454
+ "attributes": {}
455
+ }
456
+ },
457
+ "total_flos": 0.0,
458
+ "train_batch_size": 16,
459
+ "trial_name": null,
460
+ "trial_params": null
461
+ }
checkpoint-6190/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85ca94fddce2114b27b9b9bdfc249806dcd2ebc421af3e27a32222368e1077c7
+ size 4792
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "add_cross_attention": false,
+   "architectures": [
+     "XLMRobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "is_decoder": false,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "tie_word_embeddings": true,
+   "transformers_version": "5.1.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c52de2bbeecd89b72b0f6efa87d44ca44e235ef448b78509cf18a48c73eac4e4
+ size 1112197064
model_config.txt ADDED
@@ -0,0 +1,5 @@
+ model_type=regression
+ output_range=0-1
+ base_model=xlm-roberta-base
+ class_weighted=True
+ class_weights=0.621,0.827,0.883,1.226,4.312
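`model_config.txt` records five per-category class weights alongside `class_weighted=True`. The actual training loss is not part of this commit; a minimal sketch of how such weights *could* be applied (assumption: a weighted MSE where each sample is weighted by the class weight of its target's 0.2-wide sentiment bucket — this function and its name are illustrative, not the shipped training code):

```python
import numpy as np

# Per-category weights from model_config.txt
# (Very Negative, Negative, Neutral, Positive, Very Positive)
CLASS_WEIGHTS = np.array([0.621, 0.827, 0.883, 1.226, 4.312])

def weighted_mse(preds, targets):
    """Hypothetical class-weighted MSE: weight each sample by the class
    weight of the 0.2-wide bucket its target score falls into."""
    targets = np.asarray(targets, dtype=float)
    preds = np.asarray(preds, dtype=float)
    buckets = np.clip((targets / 0.2).astype(int), 0, 4)  # 5 buckets over [0, 1]
    w = CLASS_WEIGHTS[buckets]
    return float(np.sum(w * (preds - targets) ** 2) / np.sum(w))
```

Note the heavy weight (4.312) on the rarest "Very Positive" bucket, consistent with upweighting underrepresented classes during fine-tuning.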
modeling_calibrated.py ADDED
@@ -0,0 +1,184 @@
+ """
+ Custom modeling file for calibrated sentiment prediction.
+ Auto-generated - do not edit manually.
+ """
+
+ import torch
+ import torch.nn as nn
+ from transformers import AutoModel, PreTrainedModel
+ from transformers.modeling_outputs import SequenceClassifierOutput
+ import json
+ import os
+ import numpy as np
+
+
+ class CalibratedRegressionModel(PreTrainedModel):
+     """
+     Sentiment model with built-in calibration.
+
+     Usage:
+         from transformers import AutoTokenizer
+         from modeling_calibrated import CalibratedRegressionModel
+
+         model = CalibratedRegressionModel.from_pretrained("your-username/model-name")
+         tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")
+
+         # Single prediction
+         result = model.predict_sentiment("This is great!", tokenizer)
+         print(result)  # {'score': 0.85, 'category': 'Very Positive'}
+     """
+
+     def __init__(self, config):
+         super().__init__(config)
+
+         # Load base transformer
+         self.base_model = AutoModel.from_config(config)
+
+         # Regression head
+         self.dropout = nn.Dropout(0.1)
+         self.regressor = nn.Linear(config.hidden_size, 1)
+
+         # Load calibration config
+         self.calibrator = None
+         self._load_calibrator()
+
+     def _load_calibrator(self):
+         """Load calibration configuration."""
+         calibrator_path = os.path.join(
+             os.path.dirname(__file__),
+             "calibrator_config.json"
+         )
+
+         if not os.path.exists(calibrator_path):
+             print("Warning: No calibrator found - using raw predictions")
+             return
+
+         try:
+             with open(calibrator_path, 'r') as f:
+                 config = json.load(f)
+
+             self.calibrator = config
+             print(f"Loaded {config['method']} calibrator")
+
+         except Exception as e:
+             print(f"Warning: Could not load calibrator: {e}")
+             self.calibrator = None
+
+     def _calibrate_score(self, score):
+         """Apply calibration to a score."""
+         if self.calibrator is None:
+             return score
+
+         method = self.calibrator['method']
+
+         if method in ['isotonic', 'quantile_mapping']:
+             # Linear interpolation from mapping
+             mapping = self.calibrator['mapping']
+             x = np.array(mapping['input_scores'])
+             y = np.array(mapping['output_scores'])
+
+             # Simple linear interpolation
+             calibrated = np.interp(score, x, y)
+
+         elif method == 'piecewise':
+             # Apply correction from anchors
+             anchors = self.calibrator['anchors']
+             anchor_points = sorted([float(k) for k in anchors.keys()])
+             anchor_corrections = [anchors[str(p)] for p in anchor_points]
+
+             correction = np.interp(score, anchor_points, anchor_corrections)
+             calibrated = score + correction
+
+         else:
+             calibrated = score
+
+         return float(np.clip(calibrated, 0.0, 1.0))
+
+     def forward(self, input_ids, attention_mask=None, token_type_ids=None, labels=None):
+         """Forward pass with automatic calibration."""
+
+         # Get base model outputs
+         outputs = self.base_model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids
+         )
+
+         pooled_output = outputs.pooler_output
+         pooled_output = self.dropout(pooled_output)
+         logits = self.regressor(pooled_output).squeeze(-1)
+
+         # Clip to valid range
+         logits = torch.clamp(logits, 0.0, 1.0)
+
+         # Apply calibration during inference (not training)
+         if not self.training and self.calibrator is not None:
+             # Calibrate each score in the batch
+             scores = logits.detach().cpu().numpy()
+             calibrated_scores = np.array([self._calibrate_score(s) for s in scores])
+             logits = torch.tensor(calibrated_scores, device=logits.device, dtype=logits.dtype)
+
+         # Calculate loss if labels provided
+         loss = None
+         if labels is not None:
+             loss_fn = nn.MSELoss()
+             loss = loss_fn(logits, labels)
+
+         return SequenceClassifierOutput(
+             loss=loss,
+             logits=logits,
+             hidden_states=outputs.hidden_states if hasattr(outputs, 'hidden_states') else None,
+             attentions=outputs.attentions if hasattr(outputs, 'attentions') else None,
+         )
+
+     @staticmethod
+     def score_to_category(score):
+         """Convert continuous score to category label."""
+         if score <= 0.20:
+             return "Very Negative"
+         elif score <= 0.40:
+             return "Negative"
+         elif score <= 0.60:
+             return "Neutral"
+         elif score <= 0.80:
+             return "Positive"
+         else:
+             return "Very Positive"
+
+     def predict_sentiment(self, text, tokenizer, device=None):
+         """
+         Predict sentiment for a single text (convenience method).
+
+         Args:
+             text: Input text string
+             tokenizer: Loaded tokenizer
+             device: Device to use (auto-detected if None)
+
+         Returns:
+             dict: {'score': float, 'category': str}
+         """
+         if device is None:
+             device = "cuda" if torch.cuda.is_available() else "cpu"
+
+         self.eval()
+         self.to(device)
+
+         # Tokenize
+         inputs = tokenizer(
+             text,
+             return_tensors="pt",
+             padding=True,
+             truncation=True,
+             max_length=512
+         )
+         inputs = {k: v.to(device) for k, v in inputs.items()}
+
+         # Predict
+         with torch.no_grad():
+             outputs = self(**inputs)
+             score = outputs.logits.item()
+
+         return {
+             'score': score,
+             'category': self.score_to_category(score)
+         }
regressor_head.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1b74b41139df6fd69310879998ac9c14c73d69452680ada34b1c0abe031e61b
+ size 4610
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaea213cb90c14e73a4c3a9d7d3a4080cbaf4cd4ff1d82152a5d17abdf21f483
+ size 16781751
tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "add_prefix_space": true,
+   "backend": "tokenizers",
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "is_local": false,
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce28074bafbbe140485a4cd0003003f05d85fe305e3bcbce5dfb5dc603f88645
+ size 4792