---
library_name: transformers
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_trainer
model-index:
- name: emotion_classifier
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# emotion_classifier

This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0901
- Exact Match Accuracy: 0.4616
- Precision Micro: 0.6252
- Recall Micro: 0.5498
- F1 Micro: 0.5851
- Precision Macro: 0.5584
- Recall Macro: 0.4443
- F1 Macro: 0.4783

Per-label classification report on the evaluation set (values rounded to four decimals):

| Label          | Precision | Recall | F1     | Support |
|:---------------|----------:|-------:|-------:|--------:|
| admiration     | 0.7022 | 0.6409 | 0.6701 | 504 |
| amusement      | 0.7583 | 0.8674 | 0.8092 | 264 |
| anger          | 0.5074 | 0.5202 | 0.5137 | 198 |
| annoyance      | 0.3759 | 0.3313 | 0.3522 | 320 |
| approval       | 0.5150 | 0.3419 | 0.4110 | 351 |
| caring         | 0.6393 | 0.2889 | 0.3980 | 135 |
| confusion      | 0.5686 | 0.3791 | 0.4549 | 153 |
| curiosity      | 0.4870 | 0.5282 | 0.5068 | 284 |
| desire         | 0.6207 | 0.4337 | 0.5106 | 83 |
| disappointment | 0.3673 | 0.2384 | 0.2892 | 151 |
| disapproval    | 0.4388 | 0.3895 | 0.4127 | 267 |
| disgust        | 0.6410 | 0.4065 | 0.4975 | 123 |
| embarrassment  | 0.7778 | 0.3784 | 0.5091 | 37 |
| excitement     | 0.4177 | 0.3204 | 0.3626 | 103 |
| fear           | 0.5977 | 0.6667 | 0.6303 | 78 |
| gratitude      | 0.9382 | 0.9063 | 0.9220 | 352 |
| grief          | 0.0000 | 0.0000 | 0.0000 | 6 |
| joy            | 0.6558 | 0.6273 | 0.6413 | 161 |
| love           | 0.7764 | 0.7731 | 0.7747 | 238 |
| nervousness    | 0.3333 | 0.2609 | 0.2927 | 23 |
| optimism       | 0.6716 | 0.4839 | 0.5625 | 186 |
| pride          | 0.6667 | 0.1250 | 0.2105 | 16 |
| realization    | 0.3220 | 0.1310 | 0.1863 | 145 |
| relief         | 0.5000 | 0.0909 | 0.1538 | 11 |
| remorse        | 0.5849 | 0.5536 | 0.5688 | 56 |
| sadness        | 0.5909 | 0.5000 | 0.5417 | 156 |
| surprise       | 0.5114 | 0.6383 | 0.5678 | 141 |
| neutral        | 0.6695 | 0.6189 | 0.6432 | 1787 |
| micro avg      | 0.6252 | 0.5498 | 0.5851 | 6329 |
| macro avg      | 0.5584 | 0.4443 | 0.4783 | 6329 |
| weighted avg   | 0.6159 | 0.5498 | 0.5754 | 6329 |
| samples avg    | 0.5808 | 0.5735 | 0.5645 | 6329 |
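
The micro scores above pool decisions across all labels before computing a single metric, while the macro scores average per-label metrics with equal weight, so rare labels with near-zero F1 (e.g. grief, support 6) pull the macro average well below the micro average. A minimal sketch of the two averaging schemes, using made-up per-class counts rather than this model's actual confusion counts:

```python
# Illustrative (TP, FP, FN) counts per class -- NOT from this model's eval run.
# One frequent class and one rare, never-predicted class mimic the imbalance.
counts = {
    "neutral": (1100, 550, 680),  # frequent class with decent F1
    "grief":   (0, 2, 6),         # rare class: zero F1 drags the macro average
}

def f1(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN); defined as 0.0 when the denominator is 0.
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Micro: sum TP/FP/FN across classes, then compute one F1.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = f1(tp, fp, fn)

# Macro: compute F1 per class, then take the unweighted mean.
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

print(f"micro F1 = {micro_f1:.4f}, macro F1 = {macro_f1:.4f}")
```

With supports ranging from 6 (grief) to 1787 (neutral) in the report above, this is why the reported micro F1 (0.5851) sits well above the macro F1 (0.4783).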

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 45
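
The `linear` scheduler decays the learning rate from its initial value to zero over the total number of training steps. A minimal sketch, assuming zero warmup steps (none are listed) and the 2714 optimizer steps per epoch shown in the results table below:

```python
# Hedged sketch of a linear learning-rate decay (assumes zero warmup steps,
# since no warmup is listed in the hyperparameters above).
def linear_lr(step: int, total_steps: int, lr0: float = 2e-05) -> float:
    # Fraction of training remaining, clamped at 0 once training is done.
    remaining = max(0.0, (total_steps - step) / total_steps)
    return lr0 * remaining

total = 2714 * 45  # steps per epoch (from the results table) * num_epochs
print(linear_lr(0, total))           # full learning rate at the start
print(linear_lr(total // 2, total))  # half the learning rate midway
print(linear_lr(total, total))       # zero at the final step
```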
### Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match Accuracy | Precision Micro | Recall Micro | F1 Micro | Precision Macro | Recall Macro | F1 Macro | Classification Report |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:---------------------:|
| 0.1126 | 1.0 | 2714 | 0.0855 | 0.4565 | 0.6757 | 0.5042 | 0.5775 | 0.5644 | 0.3817 | 0.4259 | {'admiration': {'precision': 0.7566371681415929, 'recall': 0.7008196721311475, 'f1-score': 0.7276595744680852, 'support': 488.0}, 'amusement': {'precision': 0.7421203438395415, 'recall': 0.8547854785478548, 'f1-score': 0.7944785276073619, 'support': 303.0}, 'anger': {'precision': 0.6236559139784946, 'recall': 0.29743589743589743, 'f1-score': 0.4027777777777778, 'support': 195.0}, 'annoyance': {'precision': 0.4722222222222222, 'recall': 0.11221122112211221, 'f1-score': 0.18133333333333335, 'support': 303.0}, 'approval': {'precision': 0.5193370165745856, 'recall': 0.2367758186397985, 'f1-score': 0.32525951557093424, 'support': 397.0}, 'caring': {'precision': 0.5038167938931297, 'recall': 0.43137254901960786, 'f1-score': 0.4647887323943662, 'support': 153.0}, 'confusion': {'precision': 0.6363636363636364, 'recall': 0.27631578947368424, 'f1-score': 0.3853211009174312, 'support': 152.0}, 'curiosity': {'precision': 0.48109965635738833, 'recall': 0.5645161290322581, 'f1-score': 0.5194805194805194, 'support': 248.0}, 'desire': {'precision': 0.5862068965517241, 'recall': 0.44155844155844154, 'f1-score': 0.5037037037037037, 'support': 77.0}, 'disappointment': {'precision': 0.8, 'recall': 0.024539877300613498, 'f1-score': 0.047619047619047616, 'support': 163.0}, 'disapproval': {'precision': 0.5703125, 'recall': 0.25, 'f1-score': 0.3476190476190476, 'support': 292.0}, 'disgust': {'precision': 0.5102040816326531, 'recall': 0.25773195876288657, 'f1-score': 0.3424657534246575, 'support': 97.0}, 'embarrassment': {'precision': 0.9411764705882353, 'recall': 0.45714285714285713, 'f1-score': 0.6153846153846154, 'support': 35.0}, 'excitement': {'precision': 0.64, 'recall': 0.16666666666666666, 'f1-score': 0.2644628099173554, 'support': 96.0}, 'fear': {'precision': 0.8367346938775511, 'recall': 0.45555555555555555, 'f1-score': 0.5899280575539568, 'support': 90.0}, 'gratitude': {'precision': 0.9401197604790419, 'recall': 0.8770949720670391, 'f1-score': 0.9075144508670521, 'support': 358.0}, 'grief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 13.0}, 'joy': {'precision': 0.5759493670886076, 'recall': 0.5290697674418605, 'f1-score': 0.5515151515151515, 'support': 172.0}, 'love': {'precision': 0.7231833910034602, 'recall': 0.8293650793650794, 'f1-score': 0.7726432532347505, 'support': 252.0}, 'nervousness': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 21.0}, 'optimism': {'precision': 0.7215189873417721, 'recall': 0.5454545454545454, 'f1-score': 0.6212534059945504, 'support': 209.0}, 'pride': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 15.0}, 'realization': {'precision': 0.84, 'recall': 0.16535433070866143, 'f1-score': 0.27631578947368424, 'support': 127.0}, 'relief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 18.0}, 'remorse': {'precision': 0.65, 'recall': 0.5735294117647058, 'f1-score': 0.609375, 'support': 68.0}, 'sadness': {'precision': 0.5409836065573771, 'recall': 0.46153846153846156, 'f1-score': 0.4981132075471698, 'support': 143.0}, 'surprise': {'precision': 0.4935064935064935, 'recall': 0.5891472868217055, 'f1-score': 0.5371024734982333, 'support': 129.0}, 'neutral': {'precision': 0.6976588628762542, 'recall': 0.5906002265005662, 'f1-score': 0.6396810794234897, 'support': 1766.0}, 'micro avg': {'precision': 0.6756983826927117, 'recall': 0.5042319749216301, 'f1-score': 0.5775065074948389, 'support': 6380.0}, 'macro avg': {'precision': 0.5643859951026344, 'recall': 0.38173507121614314, 'f1-score': 0.4259212831545099, 'support': 6380.0}, 'weighted avg': {'precision': 0.6601848057578757, 'recall': 0.5042319749216301, 'f1-score': 0.5469066300328769, 'support': 6380.0}, 'samples avg': {'precision': 0.5558115247573412, 'recall': 0.531161690625384, 'f1-score': 0.5336712126796903, 'support': 6380.0}} |
| 0.0782 | 2.0 | 5428 | 0.0879 | 0.4548 | 0.6364 | 0.5237 | 0.5745 | 0.5350 | 0.4147 | 0.4515 | {'admiration': {'precision': 0.6186708860759493, 'recall': 0.8012295081967213, 'f1-score': 0.6982142857142857, 'support': 488.0}, 'amusement': {'precision': 0.7597402597402597, 'recall': 0.7722772277227723, 'f1-score': 0.7659574468085106, 'support': 303.0}, 'anger': {'precision': 0.49504950495049505, 'recall': 0.5128205128205128, 'f1-score': 0.5037783375314862, 'support': 195.0}, 'annoyance': {'precision': 0.4666666666666667, 'recall': 0.25412541254125415, 'f1-score': 0.32905982905982906, 'support': 303.0}, 'approval': {'precision': 0.4605263157894737, 'recall': 0.26448362720403024, 'f1-score': 0.336, 'support': 397.0}, 'caring': {'precision': 0.5535714285714286, 'recall': 0.40522875816993464, 'f1-score': 0.4679245283018868, 'support': 153.0}, 'confusion': {'precision': 0.5, 'recall': 0.3815789473684211, 'f1-score': 0.43283582089552236, 'support': 152.0}, 'curiosity': {'precision': 0.5240384615384616, 'recall': 0.43951612903225806, 'f1-score': 0.4780701754385965, 'support': 248.0}, 'desire': {'precision': 0.6333333333333333, 'recall': 0.4935064935064935, 'f1-score': 0.5547445255474452, 'support': 77.0}, 'disappointment': {'precision': 0.410958904109589, 'recall': 0.18404907975460122, 'f1-score': 0.2542372881355932, 'support': 163.0}, 'disapproval': {'precision': 0.4948453608247423, 'recall': 0.3287671232876712, 'f1-score': 0.3950617283950617, 'support': 292.0}, 'disgust': {'precision': 0.4457831325301205, 'recall': 0.38144329896907214, 'f1-score': 0.4111111111111111, 'support': 97.0}, 'embarrassment': {'precision': 0.8, 'recall': 0.45714285714285713, 'f1-score': 0.5818181818181818, 'support': 35.0}, 'excitement': {'precision': 0.45454545454545453, 'recall': 0.3125, 'f1-score': 0.37037037037037035, 'support': 96.0}, 'fear': {'precision': 0.7121212121212122, 'recall': 0.5222222222222223, 'f1-score': 0.6025641025641025, 'support': 90.0}, 'gratitude': {'precision': 0.9437869822485208, 'recall': 0.8910614525139665, 'f1-score': 0.9166666666666666, 'support': 358.0}, 'grief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 13.0}, 'joy': {'precision': 0.5621301775147929, 'recall': 0.5523255813953488, 'f1-score': 0.5571847507331378, 'support': 172.0}, 'love': {'precision': 0.7413793103448276, 'recall': 0.8531746031746031, 'f1-score': 0.7933579335793358, 'support': 252.0}, 'nervousness': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 21.0}, 'optimism': {'precision': 0.6981132075471698, 'recall': 0.5311004784688995, 'f1-score': 0.6032608695652174, 'support': 209.0}, 'pride': {'precision': 0.75, 'recall': 0.2, 'f1-score': 0.3157894736842105, 'support': 15.0}, 'realization': {'precision': 0.5789473684210527, 'recall': 0.1732283464566929, 'f1-score': 0.26666666666666666, 'support': 127.0}, 'relief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 18.0}, 'remorse': {'precision': 0.6363636363636364, 'recall': 0.20588235294117646, 'f1-score': 0.3111111111111111, 'support': 68.0}, 'sadness': {'precision': 0.4607329842931937, 'recall': 0.6153846153846154, 'f1-score': 0.5269461077844312, 'support': 143.0}, 'surprise': {'precision': 0.576271186440678, 'recall': 0.5271317829457365, 'f1-score': 0.5506072874493927, 'support': 129.0}, 'neutral': {'precision': 0.7031700288184438, 'recall': 0.5526613816534541, 'f1-score': 0.6188966391883323, 'support': 1766.0}, 'micro avg': {'precision': 0.6363809523809524, 'recall': 0.523667711598746, 'f1-score': 0.574548581255374, 'support': 6380.0}, 'macro avg': {'precision': 0.5350266358139107, 'recall': 0.4147443497454755, 'f1-score': 0.45150840136144593, 'support': 6380.0}, 'weighted avg': {'precision': 0.6207079623001979, 'recall': 0.523667711598746, 'f1-score': 0.5577988370571888, 'support': 6380.0}, 'samples avg': {'precision': 0.5685895073104804, 'recall': 0.5473184666420936, 'f1-score': 0.546484299580503, 'support': 6380.0}} |
| 0.0657 | 3.0 | 8142 | 0.0904 | 0.4642 | 0.6287 | 0.5544 | 0.5892 | 0.5876 | 0.4548 | 0.4924 | {'admiration': {'precision': 0.7203389830508474, 'recall': 0.6967213114754098, 'f1-score': 0.7083333333333334, 'support': 488.0}, 'amusement': {'precision': 0.7391304347826086, 'recall': 0.8415841584158416, 'f1-score': 0.7870370370370371, 'support': 303.0}, 'anger': {'precision': 0.53125, 'recall': 0.5230769230769231, 'f1-score': 0.5271317829457365, 'support': 195.0}, 'annoyance': {'precision': 0.416988416988417, 'recall': 0.3564356435643564, 'f1-score': 0.38434163701067614, 'support': 303.0}, 'approval': {'precision': 0.4690265486725664, 'recall': 0.26700251889168763, 'f1-score': 0.3402889245585875, 'support': 397.0}, 'caring': {'precision': 0.6129032258064516, 'recall': 0.37254901960784315, 'f1-score': 0.4634146341463415, 'support': 153.0}, 'confusion': {'precision': 0.6086956521739131, 'recall': 0.3684210526315789, 'f1-score': 0.45901639344262296, 'support': 152.0}, 'curiosity': {'precision': 0.46206896551724136, 'recall': 0.5403225806451613, 'f1-score': 0.49814126394052044, 'support': 248.0}, 'desire': {'precision': 0.6190476190476191, 'recall': 0.5064935064935064, 'f1-score': 0.5571428571428572, 'support': 77.0}, 'disappointment': {'precision': 0.43137254901960786, 'recall': 0.26993865030674846, 'f1-score': 0.3320754716981132, 'support': 163.0}, 'disapproval': {'precision': 0.45064377682403434, 'recall': 0.3595890410958904, 'f1-score': 0.4, 'support': 292.0}, 'disgust': {'precision': 0.5405405405405406, 'recall': 0.41237113402061853, 'f1-score': 0.4678362573099415, 'support': 97.0}, 'embarrassment': {'precision': 0.6923076923076923, 'recall': 0.5142857142857142, 'f1-score': 0.5901639344262295, 'support': 35.0}, 'excitement': {'precision': 0.42857142857142855, 'recall': 0.3125, 'f1-score': 0.3614457831325301, 'support': 96.0}, 'fear': {'precision': 0.6857142857142857, 'recall': 0.5333333333333333, 'f1-score': 0.6, 'support': 90.0}, 'gratitude': {'precision': 0.9300291545189504, 'recall': 0.8910614525139665, 'f1-score': 0.9101283880171184, 'support': 358.0}, 'grief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 13.0}, 'joy': {'precision': 0.5695364238410596, 'recall': 0.5, 'f1-score': 0.5325077399380805, 'support': 172.0}, 'love': {'precision': 0.7674418604651163, 'recall': 0.7857142857142857, 'f1-score': 0.7764705882352941, 'support': 252.0}, 'nervousness': {'precision': 0.4166666666666667, 'recall': 0.23809523809523808, 'f1-score': 0.30303030303030304, 'support': 21.0}, 'optimism': {'precision': 0.6792452830188679, 'recall': 0.5167464114832536, 'f1-score': 0.5869565217391305, 'support': 209.0}, 'pride': {'precision': 0.8333333333333334, 'recall': 0.3333333333333333, 'f1-score': 0.47619047619047616, 'support': 15.0}, 'realization': {'precision': 0.42857142857142855, 'recall': 0.2125984251968504, 'f1-score': 0.28421052631578947, 'support': 127.0}, 'relief': {'precision': 1.0, 'recall': 0.05555555555555555, 'f1-score': 0.10526315789473684, 'support': 18.0}, 'remorse': {'precision': 0.7391304347826086, 'recall': 0.5, 'f1-score': 0.5964912280701754, 'support': 68.0}, 'sadness': {'precision': 0.5314685314685315, 'recall': 0.5314685314685315, 'f1-score': 0.5314685314685315, 'support': 143.0}, 'surprise': {'precision': 0.48044692737430167, 'recall': 0.6666666666666666, 'f1-score': 0.5584415584415584, 'support': 129.0}, 'neutral': {'precision': 0.669481302774427, 'recall': 0.6285390713476784, 'f1-score': 0.6483644859813084, 'support': 1766.0}, 'micro avg': {'precision': 0.6286882332029862, 'recall': 0.5543887147335423, 'f1-score': 0.5892053973013494, 'support': 6380.0}, 'macro avg': {'precision': 0.5876411237797338, 'recall': 0.454800127114999, 'f1-score': 0.49235331483739386, 'support': 6380.0}, 'weighted avg': {'precision': 0.6194504501264384, 'recall': 0.5543887147335423, 'f1-score': 0.5787029045585259, 'support': 6380.0}, 'samples avg': {'precision': 0.5884629561371176, 'recall': 0.5802309866076913, 'f1-score': 0.5711635336036368, 'support': 6380.0}} |
| 0.0537 | 4.0 | 10856 | 0.1009 | 0.4596 | 0.6020 | 0.5602 | 0.5803 | 0.5355 | 0.4616 | 0.4835 | {'admiration': {'precision': 0.6678260869565218, 'recall': 0.7868852459016393, 'f1-score': 0.7224835371589841, 'support': 488.0}, 'amusement': {'precision': 0.7100271002710027, 'recall': 0.8646864686468647, 'f1-score': 0.7797619047619048, 'support': 303.0}, 'anger': {'precision': 0.6551724137931034, 'recall': 0.38974358974358975, 'f1-score': 0.4887459807073955, 'support': 195.0}, 'annoyance': {'precision': 0.4825174825174825, 'recall': 0.22772277227722773, 'f1-score': 0.3094170403587444, 'support': 303.0}, 'approval': {'precision': 0.36342592592592593, 'recall': 0.3954659949622166, 'f1-score': 0.37876960193003617, 'support': 397.0}, 'caring': {'precision': 0.5470085470085471, 'recall': 0.41830065359477125, 'f1-score': 0.4740740740740741, 'support': 153.0}, 'confusion': {'precision': 0.37158469945355194, 'recall': 0.4473684210526316, 'f1-score': 0.4059701492537313, 'support': 152.0}, 'curiosity': {'precision': 0.47577092511013214, 'recall': 0.43548387096774194, 'f1-score': 0.45473684210526316, 'support': 248.0}, 'desire': {'precision': 0.5588235294117647, 'recall': 0.4935064935064935, 'f1-score': 0.5241379310344828, 'support': 77.0}, 'disappointment': {'precision': 0.37037037037037035, 'recall': 0.24539877300613497, 'f1-score': 0.2952029520295203, 'support': 163.0}, 'disapproval': {'precision': 0.4838709677419355, 'recall': 0.2568493150684932, 'f1-score': 0.33557046979865773, 'support': 292.0}, 'disgust': {'precision': 0.5, 'recall': 0.41237113402061853, 'f1-score': 0.4519774011299435, 'support': 97.0}, 'embarrassment': {'precision': 0.6785714285714286, 'recall': 0.5428571428571428, 'f1-score': 0.6031746031746031, 'support': 35.0}, 'excitement': {'precision': 0.46153846153846156, 'recall': 0.3125, 'f1-score': 0.37267080745341613, 'support': 96.0}, 'fear': {'precision': 0.7794117647058824, 'recall': 0.5888888888888889, 'f1-score': 0.6708860759493671, 'support': 90.0}, 'gratitude': {'precision': 0.9093484419263456, 'recall': 0.8966480446927374, 'f1-score': 0.9029535864978903, 'support': 358.0}, 'grief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 13.0}, 'joy': {'precision': 0.5549132947976878, 'recall': 0.5581395348837209, 'f1-score': 0.5565217391304348, 'support': 172.0}, 'love': {'precision': 0.7285714285714285, 'recall': 0.8095238095238095, 'f1-score': 0.7669172932330827, 'support': 252.0}, 'nervousness': {'precision': 0.4, 'recall': 0.09523809523809523, 'f1-score': 0.15384615384615385, 'support': 21.0}, 'optimism': {'precision': 0.5821596244131455, 'recall': 0.5933014354066986, 'f1-score': 0.5876777251184834, 'support': 209.0}, 'pride': {'precision': 0.8571428571428571, 'recall': 0.4, 'f1-score': 0.5454545454545454, 'support': 15.0}, 'realization': {'precision': 0.2076923076923077, 'recall': 0.2125984251968504, 'f1-score': 0.21011673151750973, 'support': 127.0}, 'relief': {'precision': 0.2, 'recall': 0.05555555555555555, 'f1-score': 0.08695652173913043, 'support': 18.0}, 'remorse': {'precision': 0.6790123456790124, 'recall': 0.8088235294117647, 'f1-score': 0.738255033557047, 'support': 68.0}, 'sadness': {'precision': 0.5546875, 'recall': 0.4965034965034965, 'f1-score': 0.5239852398523985, 'support': 143.0}, 'surprise': {'precision': 0.5590551181102362, 'recall': 0.5503875968992248, 'f1-score': 0.5546875, 'support': 129.0}, 'neutral': {'precision': 0.654320987654321, 'recall': 0.630237825594564, 'f1-score': 0.6420536486876262, 'support': 1766.0}, 'micro avg': {'precision': 0.6019875357924878, 'recall': 0.5601880877742946, 'f1-score': 0.5803361208086385, 'support': 6380.0}, 'macro avg': {'precision': 0.5354579860486948, 'recall': 0.4616066469071776, 'f1-score': 0.4834644674840867, 'support': 6380.0}, 'weighted avg': {'precision': 0.592394586579407, 'recall': 0.5601880877742946, 'f1-score': 0.5686740247500924, 'support': 6380.0}, 'samples avg': {'precision': 0.5820125322521195, 'recall': 0.5823196952942622, 'f1-score': 0.5690054938304109, 'support': 6380.0}} |
| 0.0434 | 5.0 | 13570 | 0.1089 | 0.4228 | 0.5765 | 0.5373 | 0.5562 | 0.5047 | 0.4658 | 0.4732 | {'admiration': {'precision': 0.6920077972709552, 'recall': 0.7274590163934426, 'f1-score': 0.7092907092907093, 'support': 488.0}, 'amusement': {'precision': 0.745398773006135, 'recall': 0.801980198019802, 'f1-score': 0.7726550079491256, 'support': 303.0}, 'anger': {'precision': 0.47549019607843135, 'recall': 0.49743589743589745, 'f1-score': 0.48621553884711777, 'support': 195.0}, 'annoyance': {'precision': 0.3623693379790941, 'recall': 0.3432343234323432, 'f1-score': 0.3525423728813559, 'support': 303.0}, 'approval': {'precision': 0.37388724035608306, 'recall': 0.31738035264483627, 'f1-score': 0.34332425068119893, 'support': 397.0}, 'caring': {'precision': 0.48872180451127817, 'recall': 0.42483660130718953, 'f1-score': 0.45454545454545453, 'support': 153.0}, 'confusion': {'precision': 0.463768115942029, 'recall': 0.42105263157894735, 'f1-score': 0.4413793103448276, 'support': 152.0}, 'curiosity': {'precision': 0.4386503067484663, 'recall': 0.5766129032258065, 'f1-score': 0.49825783972125437, 'support': 248.0}, 'desire': {'precision': 0.5185185185185185, 'recall': 0.5454545454545454, 'f1-score': 0.5316455696202531, 'support': 77.0}, 'disappointment': {'precision': 0.32456140350877194, 'recall': 0.22699386503067484, 'f1-score': 0.26714801444043323, 'support': 163.0}, 'disapproval': {'precision': 0.38028169014084506, 'recall': 0.3698630136986301, 'f1-score': 0.375, 'support': 292.0}, 'disgust': {'precision': 0.4434782608695652, 'recall': 0.5257731958762887, 'f1-score': 0.4811320754716981, 'support': 97.0}, 'embarrassment': {'precision': 0.6333333333333333, 'recall': 0.5428571428571428, 'f1-score': 0.5846153846153846, 'support': 35.0}, 'excitement': {'precision': 0.35294117647058826, 'recall': 0.375, 'f1-score': 0.36363636363636365, 'support': 96.0}, 'fear': {'precision': 0.7083333333333334, 'recall': 0.5666666666666667, 'f1-score': 0.6296296296296297, 'support': 90.0}, 'gratitude': {'precision': 0.9300291545189504, 'recall': 0.8910614525139665, 'f1-score': 0.9101283880171184, 'support': 358.0}, 'grief': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 13.0}, 'joy': {'precision': 0.5448717948717948, 'recall': 0.4941860465116279, 'f1-score': 0.5182926829268293, 'support': 172.0}, 'love': {'precision': 0.67601246105919, 'recall': 0.8611111111111112, 'f1-score': 0.7574171029668412, 'support': 252.0}, 'nervousness': {'precision': 0.4, 'recall': 0.09523809523809523, 'f1-score': 0.15384615384615385, 'support': 21.0}, 'optimism': {'precision': 0.5435684647302904, 'recall': 0.6267942583732058, 'f1-score': 0.5822222222222222, 'support': 209.0}, 'pride': {'precision': 0.75, 'recall': 0.2, 'f1-score': 0.3157894736842105, 'support': 15.0}, 'realization': {'precision': 0.19424460431654678, 'recall': 0.2125984251968504, 'f1-score': 0.20300751879699247, 'support': 127.0}, 'relief': {'precision': 0.18181818181818182, 'recall': 0.1111111111111111, 'f1-score': 0.13793103448275862, 'support': 18.0}, 'remorse': {'precision': 0.7361111111111112, 'recall': 0.7794117647058824, 'f1-score': 0.7571428571428571, 'support': 68.0}, 'sadness': {'precision': 0.5068493150684932, 'recall': 0.5174825174825175, 'f1-score': 0.5121107266435986, 'support': 143.0}, 'surprise': {'precision': 0.5865384615384616, 'recall': 0.4728682170542636, 'f1-score': 0.5236051502145923, 'support': 129.0}, 'neutral': {'precision': 0.680327868852459, 'recall': 0.5169875424688561, 'f1-score': 0.5875160875160875, 'support': 1766.0}, 'micro avg': {'precision': 0.5765220316178944, 'recall': 0.5373040752351097, 'f1-score': 0.556222618854454, 'support': 6380.0}, 'macro avg': {'precision': 0.5047183109268895, 'recall': 0.46576610340677504, 'f1-score': 0.47321524714768104, 'support': 6380.0}, 'weighted avg': {'precision': 0.5788599597470344, 'recall': 0.5373040752351097, 'f1-score': 0.552452450376, 'support': 6380.0}, 'samples avg': {'precision': 0.5527091780316993, 'recall': 0.5559343899741983, 'f1-score': 0.5401874572165762, 'support': 6380.0}} |
| 0.0349 | 6.0 | 16284 | 0.1183 | 0.4423 | 0.5841 | 0.5630 | 0.5733 | 0.5372 | 0.4850 | 0.4930 | {'admiration': {'precision': 0.6482982171799028, 'recall': 0.819672131147541, 'f1-score': 0.7239819004524887, 'support': 488.0}, 'amusement': {'precision': 0.7386363636363636, 'recall': 0.858085808580858, 'f1-score': 0.7938931297709924, 'support': 303.0}, 'anger': {'precision': 0.5, 'recall': 0.5025641025641026, 'f1-score': 0.5012787723785166, 'support': 195.0}, 'annoyance': {'precision': 0.40969162995594716, 'recall': 0.3069306930693069, 'f1-score': 0.35094339622641507, 'support': 303.0}, 'approval': {'precision': 0.3697916666666667, 'recall': 0.35768261964735515, 'f1-score': 0.36363636363636365, 'support': 397.0}, 'caring': {'precision': 0.5109489051094891, 'recall': 0.45751633986928103, 'f1-score': 0.4827586206896552, 'support': 153.0}, 'confusion': {'precision': 0.4676258992805755, 'recall': 0.4276315789473684, 'f1-score': 0.44673539518900346, 'support': 152.0}, 'curiosity': {'precision': 0.48188405797101447, 'recall': 0.5362903225806451, 'f1-score': 0.5076335877862596, 'support': 248.0}, 'desire': {'precision': 0.4270833333333333, 'recall': 0.5324675324675324, 'f1-score': 0.47398843930635837, 'support': 77.0}, 'disappointment': {'precision': 0.336283185840708, 'recall': 0.2331288343558282, 'f1-score': 0.2753623188405797, 'support': 163.0}, 'disapproval': {'precision': 0.3665594855305466, 'recall': 0.3904109589041096, 'f1-score': 0.3781094527363184, 'support': 292.0}, 'disgust': {'precision': 0.5, 'recall': 0.4020618556701031, 'f1-score': 0.44571428571428573, 'support': 97.0}, 'embarrassment': {'precision': 0.59375, 'recall': 0.5428571428571428, 'f1-score': 0.5671641791044776, 'support': 35.0}, 'excitement': {'precision': 0.3305785123966942, 'recall': 0.4166666666666667, 'f1-score': 0.3686635944700461, 'support': 96.0}, 'fear': {'precision': 0.7058823529411765, 'recall': 0.5333333333333333, 'f1-score': 0.6075949367088608, 'support': 90.0}, 'gratitude': {'precision': 0.9169054441260746, 'recall': 0.8938547486033519, 'f1-score': 0.9052333804809052, 'support': 358.0}, 'grief': {'precision': 1.0, 'recall': 0.15384615384615385, 'f1-score': 0.26666666666666666, 'support': 13.0}, 'joy': {'precision': 0.5735294117647058, 'recall': 0.45348837209302323, 'f1-score': 0.5064935064935064, 'support': 172.0}, 'love': {'precision': 0.7420494699646644, 'recall': 0.8333333333333334, 'f1-score': 0.7850467289719626, 'support': 252.0}, 'nervousness': {'precision': 0.3333333333333333, 'recall': 0.14285714285714285, 'f1-score': 0.2, 'support': 21.0}, 'optimism': {'precision': 0.585, 'recall': 0.5598086124401914, 'f1-score': 0.5721271393643031, 'support': 209.0}, 'pride': {'precision': 0.7142857142857143, 'recall': 0.3333333333333333, 'f1-score': 0.45454545454545453, 'support': 15.0}, 'realization': {'precision': 0.25225225225225223, 'recall': 0.2204724409448819, 'f1-score': 0.23529411764705882, 'support': 127.0}, 'relief': {'precision': 0.13333333333333333, 'recall': 0.1111111111111111, 'f1-score': 0.12121212121212122, 'support': 18.0}, 'remorse': {'precision': 0.6867469879518072, 'recall': 0.8382352941176471, 'f1-score': 0.7549668874172185, 'support': 68.0}, 'sadness': {'precision': 0.5298013245033113, 'recall': 0.5594405594405595, 'f1-score': 0.54421768707483, 'support': 143.0}, 'surprise': {'precision': 0.5170068027210885, 'recall': 0.5891472868217055, 'f1-score': 0.5507246376811594, 'support': 129.0}, 'neutral': {'precision': 0.671523178807947, 'recall': 0.5741789354473387, 'f1-score': 0.6190476190476191, 'support': 1766.0}, 'micro avg': {'precision': 0.5840650406504065, 'recall': 0.5630094043887147, 'f1-score': 0.5733439744612929, 'support': 6380.0}, 'macro avg': {'precision': 0.5372421736745232, 'recall': 0.48501454446610526, 'f1-score': 0.49296551141476525, 'support': 6380.0}, 'weighted avg': {'precision': 0.5819808943352951, 'recall': 0.5630094043887147, 'f1-score': 0.5679464375851392, 'support': 6380.0}, 'samples avg': {'precision': 0.5731048040299792, 'recall': 0.58474628332719, 'f1-score': 0.564147929721096, 'support': 6380.0}} |
+
+
+ ### Framework versions
+
+ - Transformers 4.50.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.5.0
+ - Tokenizers 0.21.1
config.json ADDED
@@ -0,0 +1,93 @@
+ {
+   "architectures": [
+     "DebertaForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2",
+     "3": "LABEL_3",
+     "4": "LABEL_4",
+     "5": "LABEL_5",
+     "6": "LABEL_6",
+     "7": "LABEL_7",
+     "8": "LABEL_8",
+     "9": "LABEL_9",
+     "10": "LABEL_10",
+     "11": "LABEL_11",
+     "12": "LABEL_12",
+     "13": "LABEL_13",
+     "14": "LABEL_14",
+     "15": "LABEL_15",
+     "16": "LABEL_16",
+     "17": "LABEL_17",
+     "18": "LABEL_18",
+     "19": "LABEL_19",
+     "20": "LABEL_20",
+     "21": "LABEL_21",
+     "22": "LABEL_22",
+     "23": "LABEL_23",
+     "24": "LABEL_24",
+     "25": "LABEL_25",
+     "26": "LABEL_26",
+     "27": "LABEL_27"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_10": 10,
+     "LABEL_11": 11,
+     "LABEL_12": 12,
+     "LABEL_13": 13,
+     "LABEL_14": 14,
+     "LABEL_15": 15,
+     "LABEL_16": 16,
+     "LABEL_17": 17,
+     "LABEL_18": 18,
+     "LABEL_19": 19,
+     "LABEL_2": 2,
+     "LABEL_20": 20,
+     "LABEL_21": 21,
+     "LABEL_22": 22,
+     "LABEL_23": 23,
+     "LABEL_24": 24,
+     "LABEL_25": 25,
+     "LABEL_26": 26,
+     "LABEL_27": 27,
+     "LABEL_3": 3,
+     "LABEL_4": 4,
+     "LABEL_5": 5,
+     "LABEL_6": 6,
+     "LABEL_7": 7,
+     "LABEL_8": 8,
+     "LABEL_9": 9
+   },
+   "layer_norm_eps": 1e-07,
+   "legacy": true,
+   "max_position_embeddings": 512,
+   "max_relative_positions": -1,
+   "model_type": "deberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_dropout": 0,
+   "pooler_hidden_act": "gelu",
+   "pooler_hidden_size": 768,
+   "pos_att_type": [
+     "c2p",
+     "p2c"
+   ],
+   "position_biased_input": false,
+   "problem_type": "multi_label_classification",
+   "relative_attention": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.50.3",
+   "type_vocab_size": 0,
+   "vocab_size": 50265
+ }
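The config above sets `"problem_type": "multi_label_classification"` with 28 generic `LABEL_0`–`LABEL_27` entries, which means the classification head's 28 logits are decoded independently with a sigmoid rather than a softmax. A minimal decoding sketch, assuming the common 0.5 probability threshold (the threshold is not stated anywhere in this commit) and using the generic label names from `id2label`:

```python
import math

# id2label as defined in config.json: generic names LABEL_0..LABEL_27.
id2label = {i: f"LABEL_{i}" for i in range(28)}

def predict_labels(logits, threshold=0.5):
    """Multi-label decoding: sigmoid each logit independently and keep
    every label whose probability clears the threshold (no softmax, so
    zero, one, or several labels can fire for one input)."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [id2label[i] for i, p in enumerate(probs) if p >= threshold]

# Example: positive logits at indices 0 and 27, negative elsewhere.
logits = [-3.0] * 28
logits[0], logits[27] = 2.0, 1.5
print(predict_labels(logits))  # ['LABEL_0', 'LABEL_27']
```

With real inference the logits would come from the model's forward pass; the decoding step stays the same.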
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
metrics/summary.txt ADDED
@@ -0,0 +1,17 @@
+ Model: microsoft/deberta-base
+ Training completed on: 2025-04-06 00:27:38
+ Results directory: ./results/20250406_000832
+ TensorBoard logs: tensorboard --logdir=./results/20250406_000832
+ View results with: python mode.py --hub_model_id suku9/emotion_classifier --view_results ./results/20250406_000832
+
+ === Train Results ===
+ Micro F1: 0.7785
+ Macro F1: 0.6870
+
+ === Validation Results ===
+ Micro F1: 0.5892
+ Macro F1: 0.4924
+
+ === Test Results ===
+ Micro F1: 0.5851
+ Macro F1: 0.4783
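The summary reports micro F1 well above macro F1 on every split (e.g. 0.5851 vs 0.4783 on test). That gap is expected for this dataset: micro F1 pools true/false positives across all label decisions, so frequent classes dominate, while macro F1 averages per-class F1 and lets rare classes like `grief` or `relief` (near-zero F1 in the test report) drag the score down. A toy two-class computation with hypothetical counts illustrating the effect:

```python
def f1(tp, fp, fn):
    """F1 from raw counts, defined as 0 when undefined."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical counts: a frequent class predicted well, a rare class
# predicted poorly (the 'grief'/'relief' situation in the test report).
counts = {"frequent": (90, 10, 10), "rare": (1, 1, 9)}  # (tp, fp, fn)

# Macro: average the per-class F1 scores -- the rare class pulls it down.
macro = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro: pool the counts first, then compute a single F1 --
# dominated by the frequent class.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro = f1(tp, fp, fn)

print(round(micro, 3), round(macro, 3))  # micro ≈ 0.858, macro ≈ 0.533
```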
metrics/test_results.json ADDED
@@ -0,0 +1,209 @@
+ {
+   "eval_loss": 0.09008365124464035,
+   "eval_exact_match_accuracy": 0.4615809839690437,
+   "eval_precision_micro": 0.6252245777937477,
+   "eval_recall_micro": 0.5498498972981514,
+   "eval_f1_micro": 0.5851197982345523,
+   "eval_precision_macro": 0.5584146968787934,
+   "eval_recall_macro": 0.4443044578607666,
+   "eval_f1_macro": 0.4783253683909286,
+   "eval_classification_report": {
+     "admiration": {
+       "precision": 0.7021739130434783,
+       "recall": 0.6408730158730159,
+       "f1-score": 0.6701244813278008,
+       "support": 504.0
+     },
+     "amusement": {
+       "precision": 0.7582781456953642,
+       "recall": 0.8674242424242424,
+       "f1-score": 0.8091872791519434,
+       "support": 264.0
+     },
+     "anger": {
+       "precision": 0.5073891625615764,
+       "recall": 0.5202020202020202,
+       "f1-score": 0.513715710723192,
+       "support": 198.0
+     },
+     "annoyance": {
+       "precision": 0.375886524822695,
+       "recall": 0.33125,
+       "f1-score": 0.3521594684385382,
+       "support": 320.0
+     },
+     "approval": {
+       "precision": 0.5150214592274678,
+       "recall": 0.3418803418803419,
+       "f1-score": 0.410958904109589,
+       "support": 351.0
+     },
+     "caring": {
+       "precision": 0.639344262295082,
+       "recall": 0.28888888888888886,
+       "f1-score": 0.3979591836734694,
+       "support": 135.0
+     },
+     "confusion": {
+       "precision": 0.5686274509803921,
+       "recall": 0.3790849673202614,
+       "f1-score": 0.4549019607843137,
+       "support": 153.0
+     },
+     "curiosity": {
+       "precision": 0.487012987012987,
+       "recall": 0.528169014084507,
+       "f1-score": 0.5067567567567568,
+       "support": 284.0
+     },
+     "desire": {
+       "precision": 0.6206896551724138,
+       "recall": 0.43373493975903615,
+       "f1-score": 0.5106382978723404,
+       "support": 83.0
+     },
+     "disappointment": {
+       "precision": 0.3673469387755102,
+       "recall": 0.23841059602649006,
+       "f1-score": 0.2891566265060241,
+       "support": 151.0
+     },
+     "disapproval": {
+       "precision": 0.4388185654008439,
+       "recall": 0.3895131086142322,
+       "f1-score": 0.4126984126984127,
+       "support": 267.0
+     },
+     "disgust": {
+       "precision": 0.6410256410256411,
+       "recall": 0.4065040650406504,
+       "f1-score": 0.4975124378109453,
+       "support": 123.0
+     },
+     "embarrassment": {
+       "precision": 0.7777777777777778,
+       "recall": 0.3783783783783784,
+       "f1-score": 0.509090909090909,
+       "support": 37.0
+     },
+     "excitement": {
+       "precision": 0.4177215189873418,
+       "recall": 0.32038834951456313,
+       "f1-score": 0.3626373626373626,
+       "support": 103.0
+     },
+     "fear": {
+       "precision": 0.5977011494252874,
+       "recall": 0.6666666666666666,
+       "f1-score": 0.6303030303030303,
+       "support": 78.0
+     },
+     "gratitude": {
+       "precision": 0.9382352941176471,
+       "recall": 0.90625,
+       "f1-score": 0.9219653179190751,
+       "support": 352.0
+     },
+     "grief": {
+       "precision": 0.0,
+       "recall": 0.0,
+       "f1-score": 0.0,
+       "support": 6.0
+     },
+     "joy": {
+       "precision": 0.6558441558441559,
+       "recall": 0.6273291925465838,
+       "f1-score": 0.6412698412698413,
+       "support": 161.0
+     },
+     "love": {
+       "precision": 0.7763713080168776,
+       "recall": 0.773109243697479,
+       "f1-score": 0.7747368421052632,
+       "support": 238.0
+     },
+     "nervousness": {
+       "precision": 0.3333333333333333,
+       "recall": 0.2608695652173913,
+       "f1-score": 0.2926829268292683,
+       "support": 23.0
+     },
+     "optimism": {
+       "precision": 0.6716417910447762,
+       "recall": 0.4838709677419355,
+       "f1-score": 0.5625,
+       "support": 186.0
+     },
+     "pride": {
+       "precision": 0.6666666666666666,
+       "recall": 0.125,
+       "f1-score": 0.21052631578947367,
+       "support": 16.0
+     },
+     "realization": {
+       "precision": 0.3220338983050847,
+       "recall": 0.1310344827586207,
+       "f1-score": 0.18627450980392157,
+       "support": 145.0
+     },
+     "relief": {
+       "precision": 0.5,
+       "recall": 0.09090909090909091,
+       "f1-score": 0.15384615384615385,
+       "support": 11.0
+     },
+     "remorse": {
+       "precision": 0.5849056603773585,
+       "recall": 0.5535714285714286,
+       "f1-score": 0.5688073394495413,
+       "support": 56.0
+     },
+     "sadness": {
+       "precision": 0.5909090909090909,
+       "recall": 0.5,
+       "f1-score": 0.5416666666666666,
+       "support": 156.0
+     },
+     "surprise": {
+       "precision": 0.5113636363636364,
+       "recall": 0.6382978723404256,
+       "f1-score": 0.5678233438485805,
+       "support": 141.0
+     },
+     "neutral": {
+       "precision": 0.6694915254237288,
+       "recall": 0.6189143816452154,
+       "f1-score": 0.6432102355335854,
+       "support": 1787.0
+     },
+     "micro avg": {
+       "precision": 0.6252245777937477,
+       "recall": 0.5498498972981514,
+       "f1-score": 0.5851197982345523,
+       "support": 6329.0
+     },
+     "macro avg": {
+       "precision": 0.5584146968787934,
+       "recall": 0.4443044578607666,
+       "f1-score": 0.4783253683909286,
+       "support": 6329.0
+     },
+     "weighted avg": {
+       "precision": 0.6159179228918013,
+       "recall": 0.5498498972981514,
+       "f1-score": 0.5753626997680155,
+       "support": 6329.0
+     },
+     "samples avg": {
+       "precision": 0.5807689945335053,
+       "recall": 0.5735366377986609,
+       "f1-score": 0.5644800687918433,
+       "support": 6329.0
+     }
+   },
+   "eval_runtime": 7.6034,
+   "eval_samples_per_second": 713.755,
+   "eval_steps_per_second": 44.717,
+   "epoch": 6.0,
+   "timestamp": "2025-04-06 00:27:38"
+ }
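The `eval_exact_match_accuracy` reported above (0.4616 on test) is the strictest of the metrics: a sample counts as correct only when its full predicted label set equals the gold set, so one missing or extra emotion makes the whole sample wrong. A small illustrative sketch of the computation (hypothetical gold/predicted sets, not taken from the evaluation data):

```python
def exact_match_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label set equals the gold
    label set exactly (also called subset accuracy)."""
    assert len(y_true) == len(y_pred)
    hits = sum(set(t) == set(p) for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

gold = [{"joy"}, {"anger", "annoyance"}, {"neutral"}]
pred = [{"joy"}, {"anger"}, {"neutral"}]  # second sample misses 'annoyance'
print(exact_match_accuracy(gold, pred))  # 2/3
```

This explains why exact-match accuracy sits well below micro F1 here: partially correct multi-label predictions earn F1 credit but no exact-match credit.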
metrics/train_results.json ADDED
@@ -0,0 +1,209 @@
+ {
+   "eval_loss": 0.05001503974199295,
+   "eval_exact_match_accuracy": 0.6630269523151348,
+   "eval_precision_micro": 0.8178468301512736,
+   "eval_recall_micro": 0.7426765551924545,
+   "eval_f1_micro": 0.778451219887395,
+   "eval_precision_macro": 0.7951070811492953,
+   "eval_recall_macro": 0.6451352636170852,
+   "eval_f1_macro": 0.6869512847504934,
+   "eval_classification_report": {
+     "admiration": {
+       "precision": 0.8802852776362711,
+       "recall": 0.8368038740920097,
+       "f1-score": 0.8579940417080437,
+       "support": 4130.0
+     },
+     "amusement": {
+       "precision": 0.8285163776493256,
+       "recall": 0.9235395189003437,
+       "f1-score": 0.8734511476741824,
+       "support": 2328.0
+     },
+     "anger": {
+       "precision": 0.7229129662522202,
+       "recall": 0.7791959157626037,
+       "f1-score": 0.75,
+       "support": 1567.0
+     },
+     "annoyance": {
+       "precision": 0.654065934065934,
+       "recall": 0.6024291497975709,
+       "f1-score": 0.627186512118019,
+       "support": 2470.0
+     },
+     "approval": {
+       "precision": 0.8020329138431752,
+       "recall": 0.5637972099353522,
+       "f1-score": 0.6621378621378622,
+       "support": 2939.0
+     },
+     "caring": {
+       "precision": 0.8542458808618505,
+       "recall": 0.6200551977920883,
+       "f1-score": 0.7185501066098081,
+       "support": 1087.0
+     },
+     "confusion": {
+       "precision": 0.8349397590361446,
+       "recall": 0.506578947368421,
+       "f1-score": 0.6305732484076433,
+       "support": 1368.0
+     },
+     "curiosity": {
+       "precision": 0.6951124903025602,
+       "recall": 0.8178913738019169,
+       "f1-score": 0.7515202348500734,
+       "support": 2191.0
+     },
+     "desire": {
+       "precision": 0.7435897435897436,
+       "recall": 0.7238689547581904,
+       "f1-score": 0.733596837944664,
+       "support": 641.0
+     },
+     "disappointment": {
+       "precision": 0.6796019900497512,
+       "recall": 0.5382190701339638,
+       "f1-score": 0.6007036059806509,
+       "support": 1269.0
+     },
+     "disapproval": {
+       "precision": 0.737186477644493,
+       "recall": 0.66864490603363,
+       "f1-score": 0.7012448132780082,
+       "support": 2022.0
+     },
+     "disgust": {
+       "precision": 0.780448717948718,
+       "recall": 0.6141235813366961,
+       "f1-score": 0.6873676781933663,
+       "support": 793.0
+     },
+     "embarrassment": {
+       "precision": 0.8025210084033614,
+       "recall": 0.6303630363036303,
+       "f1-score": 0.7060998151571165,
+       "support": 303.0
+     },
+     "excitement": {
+       "precision": 0.7652733118971061,
+       "recall": 0.5580304806565064,
+       "f1-score": 0.6454237288135594,
+       "support": 853.0
+     },
+     "fear": {
+       "precision": 0.7902973395931142,
+       "recall": 0.8473154362416108,
+       "f1-score": 0.8178137651821862,
+       "support": 596.0
+     },
+     "gratitude": {
+       "precision": 0.945342571208622,
+       "recall": 0.9226145755071374,
+       "f1-score": 0.9338403041825095,
+       "support": 2662.0
+     },
+     "grief": {
+       "precision": 1.0,
+       "recall": 0.03896103896103896,
+       "f1-score": 0.075,
+       "support": 77.0
+     },
+     "joy": {
+       "precision": 0.7462257368799425,
+       "recall": 0.7148760330578512,
+       "f1-score": 0.7302145620823074,
+       "support": 1452.0
+     },
+     "love": {
+       "precision": 0.8949416342412452,
+       "recall": 0.8820709491850431,
+       "f1-score": 0.8884596813133752,
+       "support": 2086.0
+     },
+     "nervousness": {
+       "precision": 0.7521367521367521,
+       "recall": 0.5365853658536586,
+       "f1-score": 0.6263345195729537,
+       "support": 164.0
+     },
+     "optimism": {
+       "precision": 0.8284815106215578,
+       "recall": 0.6660341555977229,
+       "f1-score": 0.738429172510519,
+       "support": 1581.0
+     },
+     "pride": {
+       "precision": 0.7755102040816326,
+       "recall": 0.34234234234234234,
+       "f1-score": 0.475,
+       "support": 111.0
+     },
+     "realization": {
+       "precision": 0.7507645259938838,
+       "recall": 0.4423423423423423,
+       "f1-score": 0.5566893424036281,
+       "support": 1110.0
+     },
+     "relief": {
+       "precision": 0.825,
+       "recall": 0.21568627450980393,
+       "f1-score": 0.34196891191709844,
+       "support": 153.0
+     },
+     "remorse": {
+       "precision": 0.7837837837837838,
+       "recall": 0.691743119266055,
+       "f1-score": 0.7348927875243665,
+       "support": 545.0
+     },
+     "sadness": {
+       "precision": 0.8221649484536082,
+       "recall": 0.7217194570135747,
+       "f1-score": 0.7686746987951807,
+       "support": 1326.0
+     },
+     "surprise": {
+       "precision": 0.6870748299319728,
+       "recall": 0.8575471698113207,
+       "f1-score": 0.7629039026437264,
+       "support": 1060.0
+     },
+     "neutral": {
+       "precision": 0.880541586073501,
+       "recall": 0.8004079049159575,
+       "f1-score": 0.8385646920129679,
+       "support": 14219.0
+     },
+     "micro avg": {
+       "precision": 0.8178468301512736,
+       "recall": 0.7426765551924545,
+       "f1-score": 0.778451219887395,
+       "support": 51103.0
+     },
+     "macro avg": {
+       "precision": 0.7951070811492953,
+       "recall": 0.6451352636170852,
+       "f1-score": 0.6869512847504934,
+       "support": 51103.0
+     },
+     "weighted avg": {
+       "precision": 0.819284989740342,
+       "recall": 0.7426765551924545,
+       "f1-score": 0.7736425275838884,
+       "support": 51103.0
+     },
+     "samples avg": {
+       "precision": 0.788094141134915,
+       "recall": 0.7679812639176841,
+       "f1-score": 0.7655438180801,
+       "support": 51103.0
+     }
+   },
+   "eval_runtime": 60.6603,
+   "eval_samples_per_second": 715.625,
+   "eval_steps_per_second": 44.741,
+   "epoch": 6.0,
+   "timestamp": "2025-04-06 00:27:38"
+ }
metrics/val_results.json ADDED
@@ -0,0 +1,209 @@
+ {
+   "eval_loss": 0.09040758013725281,
+   "eval_exact_match_accuracy": 0.46424622189458165,
+   "eval_precision_micro": 0.6286882332029862,
+   "eval_recall_micro": 0.5543887147335423,
+   "eval_f1_micro": 0.5892053973013494,
+   "eval_precision_macro": 0.5876411237797338,
+   "eval_recall_macro": 0.454800127114999,
+   "eval_f1_macro": 0.49235331483739386,
+   "eval_classification_report": {
+     "admiration": {
+       "precision": 0.7203389830508474,
+       "recall": 0.6967213114754098,
+       "f1-score": 0.7083333333333334,
+       "support": 488.0
+     },
+     "amusement": {
+       "precision": 0.7391304347826086,
+       "recall": 0.8415841584158416,
+       "f1-score": 0.7870370370370371,
+       "support": 303.0
+     },
+     "anger": {
+       "precision": 0.53125,
+       "recall": 0.5230769230769231,
+       "f1-score": 0.5271317829457365,
+       "support": 195.0
+     },
+     "annoyance": {
+       "precision": 0.416988416988417,
+       "recall": 0.3564356435643564,
+       "f1-score": 0.38434163701067614,
+       "support": 303.0
+     },
+     "approval": {
+       "precision": 0.4690265486725664,
+       "recall": 0.26700251889168763,
+       "f1-score": 0.3402889245585875,
+       "support": 397.0
+     },
+     "caring": {
+       "precision": 0.6129032258064516,
+       "recall": 0.37254901960784315,
+       "f1-score": 0.4634146341463415,
+       "support": 153.0
+     },
+     "confusion": {
+       "precision": 0.6086956521739131,
+       "recall": 0.3684210526315789,
+       "f1-score": 0.45901639344262296,
+       "support": 152.0
+     },
+     "curiosity": {
+       "precision": 0.46206896551724136,
+       "recall": 0.5403225806451613,
+       "f1-score": 0.49814126394052044,
+       "support": 248.0
+     },
+     "desire": {
+       "precision": 0.6190476190476191,
+       "recall": 0.5064935064935064,
+       "f1-score": 0.5571428571428572,
+       "support": 77.0
+     },
+     "disappointment": {
+       "precision": 0.43137254901960786,
+       "recall": 0.26993865030674846,
+       "f1-score": 0.3320754716981132,
+       "support": 163.0
+     },
+     "disapproval": {
+       "precision": 0.45064377682403434,
+       "recall": 0.3595890410958904,
+       "f1-score": 0.4,
+       "support": 292.0
+     },
+     "disgust": {
+       "precision": 0.5405405405405406,
+       "recall": 0.41237113402061853,
+       "f1-score": 0.4678362573099415,
+       "support": 97.0
+     },
+     "embarrassment": {
+       "precision": 0.6923076923076923,
+       "recall": 0.5142857142857142,
+       "f1-score": 0.5901639344262295,
+       "support": 35.0
+     },
+     "excitement": {
+       "precision": 0.42857142857142855,
+       "recall": 0.3125,
+       "f1-score": 0.3614457831325301,
+       "support": 96.0
+     },
+     "fear": {
+       "precision": 0.6857142857142857,
+       "recall": 0.5333333333333333,
+       "f1-score": 0.6,
+       "support": 90.0
+     },
+     "gratitude": {
+       "precision": 0.9300291545189504,
+       "recall": 0.8910614525139665,
+       "f1-score": 0.9101283880171184,
+       "support": 358.0
+     },
+     "grief": {
+       "precision": 0.0,
+       "recall": 0.0,
+       "f1-score": 0.0,
+       "support": 13.0
+     },
+     "joy": {
+       "precision": 0.5695364238410596,
+       "recall": 0.5,
+       "f1-score": 0.5325077399380805,
+       "support": 172.0
+     },
+     "love": {
+       "precision": 0.7674418604651163,
+       "recall": 0.7857142857142857,
+       "f1-score": 0.7764705882352941,
+       "support": 252.0
+     },
+     "nervousness": {
+       "precision": 0.4166666666666667,
+       "recall": 0.23809523809523808,
+       "f1-score": 0.30303030303030304,
+       "support": 21.0
+     },
+     "optimism": {
+       "precision": 0.6792452830188679,
+       "recall": 0.5167464114832536,
+       "f1-score": 0.5869565217391305,
+       "support": 209.0
+     },
+     "pride": {
+       "precision": 0.8333333333333334,
+       "recall": 0.3333333333333333,
+       "f1-score": 0.47619047619047616,
+       "support": 15.0
+     },
+     "realization": {
+       "precision": 0.42857142857142855,
+       "recall": 0.2125984251968504,
+       "f1-score": 0.28421052631578947,
+       "support": 127.0
+     },
+     "relief": {
+       "precision": 1.0,
+       "recall": 0.05555555555555555,
+       "f1-score": 0.10526315789473684,
+       "support": 18.0
+     },
+     "remorse": {
+       "precision": 0.7391304347826086,
+       "recall": 0.5,
+       "f1-score": 0.5964912280701754,
+       "support": 68.0
+     },
+     "sadness": {
+       "precision": 0.5314685314685315,
+       "recall": 0.5314685314685315,
+       "f1-score": 0.5314685314685315,
+       "support": 143.0
+     },
+     "surprise": {
+       "precision": 0.48044692737430167,
+       "recall": 0.6666666666666666,
+       "f1-score": 0.5584415584415584,
+       "support": 129.0
+     },
+     "neutral": {
+       "precision": 0.669481302774427,
+       "recall": 0.6285390713476784,
+       "f1-score": 0.6483644859813084,
+       "support": 1766.0
+     },
+     "micro avg": {
+       "precision": 0.6286882332029862,
+       "recall": 0.5543887147335423,
+       "f1-score": 0.5892053973013494,
+       "support": 6380.0
+     },
+     "macro avg": {
+       "precision": 0.5876411237797338,
+       "recall": 0.454800127114999,
+       "f1-score": 0.49235331483739386,
+       "support": 6380.0
+     },
+     "weighted avg": {
+       "precision": 0.6194504501264384,
+       "recall": 0.5543887147335423,
+       "f1-score": 0.5787029045585259,
+       "support": 6380.0
+     },
+     "samples avg": {
+       "precision": 0.5884629561371176,
+       "recall": 0.5802309866076913,
+       "f1-score": 0.5711635336036368,
+       "support": 6380.0
+     }
+   },
+   "eval_runtime": 7.5881,
+   "eval_samples_per_second": 715.066,
+   "eval_steps_per_second": 44.807,
+   "epoch": 6.0,
+   "timestamp": "2025-04-06 00:27:38"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e8a8b47d4990b17bd7a390d73f3056a168ca09658563431cc1c8ef1388a67f8
+ size 556879544
runs/Apr06_00-08-32_crest-g001/events.out.tfevents.1743916116.crest-g001.791237.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5d240ad34d2f8e9c55623b81cbfab01b5bcb998c4ed55619e74f2fd316d2dbd
+ size 12015
runs/Apr06_00-08-32_crest-g001/events.out.tfevents.1743917243.crest-g001.791237.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07656f5293e6d654b0fb0a220a2fac1c8ec259ed59d2ee93b40df1f7206e8fb7
+ size 2095
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,60 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "[MASK]",
+       "lstrip": true,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "tokenizer_class": "DebertaTokenizer",
+   "unk_token": "[UNK]",
+   "vocab_type": "gpt2"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b64e2e3ee8f097a410b8123c3100887a4c94da1f8abf657b78cacea0188b9c1
+ size 5368
vocab.json ADDED
The diff for this file is too large to render. See raw diff