YagiASAFAS committed on
Commit 9fe3b67 · verified · 1 Parent(s): dc075f5

MalaysiaPoliBERT Push

Files changed (1): README.md (+92 −3)
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: MalaysiaPoliBERT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# MalaysiaPoliBERT

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1928
- Democracy F1: 0.9556
- Democracy Accuracy: 0.9574
- Economy F1: 0.9352
- Economy Accuracy: 0.9381
- Race F1: 0.9569
- Race Accuracy: 0.9580
- Leadership F1: 0.8411
- Leadership Accuracy: 0.8457
- Development F1: 0.9222
- Development Accuracy: 0.9269
- Corruption F1: 0.9611
- Corruption Accuracy: 0.9627
- Instability F1: 0.9462
- Instability Accuracy: 0.9492
- Safety F1: 0.9213
- Safety Accuracy: 0.9258
- Administration F1: 0.9367
- Administration Accuracy: 0.9412
- Education F1: 0.9661
- Education Accuracy: 0.9678
- Religion F1: 0.9590
- Religion Accuracy: 0.9598
- Environment F1: 0.9808
- Environment Accuracy: 0.9821
- Overall F1: 0.9402
- Overall Accuracy: 0.9429

## Model description

More information needed

## Intended uses & limitations

More information needed
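
The card does not yet document inference. As a minimal sketch, assume the checkpoint is published under the hypothetical Hub id `YagiASAFAS/MalaysiaPoliBERT` and uses a standard multi-label sequence-classification head with one logit per aspect; the actual head layout and label order may differ, so treat the aspect list and `classify` helper below as illustrative:

```python
import math

# The 12 aspects reported on this card, in a hypothetical label order.
ASPECTS = ["Democracy", "Economy", "Race", "Leadership", "Development",
           "Corruption", "Instability", "Safety", "Administration",
           "Education", "Religion", "Environment"]

def aspect_scores(logits, labels=ASPECTS, threshold=0.5):
    """Map raw per-aspect logits to (sigmoid score, binary decision) pairs,
    assuming a multi-label head with one logit per aspect."""
    scores = {lab: 1 / (1 + math.exp(-z)) for lab, z in zip(labels, logits)}
    return {lab: (s, s >= threshold) for lab, s in scores.items()}

def classify(text, model_id="YagiASAFAS/MalaysiaPoliBERT"):
    """Hypothetical end-to-end usage; requires `transformers`, `torch`,
    and network access to the published checkpoint (the id is an assumption)."""
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits[0].tolist()
    return aspect_scores(logits)
```

If the published model instead uses separate per-aspect classification heads, the post-processing in `aspect_scores` would need to be adapted to that output shape.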

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
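
The hyperparameters above can be expressed as a `transformers.TrainingArguments` sketch. This is an illustrative reconstruction, not the exact training script: `output_dir` is hypothetical, and the per-device batch size of 16 with 4 gradient-accumulation steps yields the total train batch size of 64.

```python
def build_training_args():
    """Illustrative reconstruction of the hyperparameters listed above.
    Requires the `transformers` library; output_dir is hypothetical."""
    from transformers import TrainingArguments
    return TrainingArguments(
        output_dir="MalaysiaPoliBERT",   # hypothetical
        learning_rate=5e-5,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        seed=42,
        gradient_accumulation_steps=4,   # 16 x 4 = total train batch size 64
        optim="adamw_torch",
        adam_beta1=0.9,
        adam_beta2=0.999,
        adam_epsilon=1e-8,
        lr_scheduler_type="linear",
        warmup_steps=500,
        num_train_epochs=5,
        fp16=True,                       # Native AMP mixed precision
    )
```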
76
+ ### Training results
77
+
78
+ | Training Loss | Epoch | Step | Validation Loss | Democracy F1 | Democracy Accuracy | Economy F1 | Economy Accuracy | Race F1 | Race Accuracy | Leadership F1 | Leadership Accuracy | Development F1 | Development Accuracy | Corruption F1 | Corruption Accuracy | Instability F1 | Instability Accuracy | Safety F1 | Safety Accuracy | Administration F1 | Administration Accuracy | Education F1 | Education Accuracy | Religion F1 | Religion Accuracy | Environment F1 | Environment Accuracy | Overall F1 | Overall Accuracy |
79
+ |:-------------:|:------:|:----:|:---------------:|:------------:|:------------------:|:----------:|:----------------:|:-------:|:-------------:|:-------------:|:-------------------:|:--------------:|:--------------------:|:-------------:|:-------------------:|:--------------:|:--------------------:|:---------:|:---------------:|:-----------------:|:-----------------------:|:------------:|:------------------:|:-----------:|:-----------------:|:--------------:|:--------------------:|:----------:|:----------------:|
80
+ | 0.2762 | 1.0 | 600 | 0.2618 | 0.9216 | 0.9410 | 0.8961 | 0.9121 | 0.9179 | 0.9339 | 0.7244 | 0.7770 | 0.8460 | 0.8856 | 0.9274 | 0.9416 | 0.8918 | 0.9236 | 0.8792 | 0.8998 | 0.8800 | 0.9163 | 0.9518 | 0.9588 | 0.9355 | 0.9454 | 0.9718 | 0.9757 | 0.8953 | 0.9176 |
81
+ | 0.2 | 2.0 | 1200 | 0.2052 | 0.9428 | 0.9518 | 0.9226 | 0.9292 | 0.9507 | 0.9542 | 0.7889 | 0.8134 | 0.8957 | 0.9128 | 0.9551 | 0.9587 | 0.9396 | 0.9465 | 0.9130 | 0.9185 | 0.9296 | 0.9375 | 0.9648 | 0.9664 | 0.9558 | 0.9577 | 0.9799 | 0.9817 | 0.9282 | 0.9357 |
82
+ | 0.1426 | 3.0 | 1800 | 0.1916 | 0.9538 | 0.9574 | 0.9318 | 0.9351 | 0.9564 | 0.9582 | 0.8296 | 0.8378 | 0.9163 | 0.9235 | 0.9586 | 0.9591 | 0.9468 | 0.9484 | 0.9200 | 0.9230 | 0.9331 | 0.9393 | 0.9648 | 0.9673 | 0.9582 | 0.9589 | 0.9826 | 0.9838 | 0.9377 | 0.9410 |
83
+ | 0.103 | 4.0 | 2400 | 0.1908 | 0.9548 | 0.9579 | 0.9348 | 0.9364 | 0.9570 | 0.9582 | 0.8368 | 0.8416 | 0.9214 | 0.9261 | 0.9615 | 0.9627 | 0.9460 | 0.9491 | 0.9209 | 0.9253 | 0.9370 | 0.9418 | 0.9675 | 0.9690 | 0.9602 | 0.9607 | 0.9809 | 0.9820 | 0.9399 | 0.9426 |
84
+ | 0.0838 | 4.9921 | 2995 | 0.1928 | 0.9556 | 0.9574 | 0.9352 | 0.9381 | 0.9569 | 0.9580 | 0.8411 | 0.8457 | 0.9222 | 0.9269 | 0.9611 | 0.9627 | 0.9462 | 0.9492 | 0.9213 | 0.9258 | 0.9367 | 0.9412 | 0.9661 | 0.9678 | 0.9590 | 0.9598 | 0.9808 | 0.9821 | 0.9402 | 0.9429 |
85
+
86
+

### Framework versions

- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
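
To approximate the environment above, the listed versions can be pinned; this is a sketch, and the exact PyTorch wheel depends on your platform (the `+cu124` build requires the matching CUDA wheel index from pytorch.org rather than plain PyPI):

```shell
# Pin the framework versions reported on this card.
pip install "transformers==4.50.3" "torch==2.6.0" "datasets==3.5.0" "tokenizers==0.21.1"
```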