---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: PhilippinesPoliBERT
  results: []
---

# PhilippinesPoliBERT

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2115
- Regionalism F1: 0.9786
- Regionalism Accuracy: 0.9775
- Clientelism F1: 0.9601
- Clientelism Accuracy: 0.9610
- Economic Policy F1: 0.9521
- Economic Policy Accuracy: 0.9520
- Security F1: 0.9602
- Security Accuracy: 0.9620
- Discipline Among Poor F1: 0.9767
- Discipline Among Poor Accuracy: 0.9775
- Populism F1: 0.9020
- Populism Accuracy: 0.9015
- Marcos Duterte Alliance F1: 0.9447
- Marcos Duterte Alliance Accuracy: 0.9485
- Uniteam Positive Campaign F1: 0.8936
- Uniteam Positive Campaign Accuracy: 0.8940
- Overall F1: 0.9460
- Overall Accuracy: 0.9467
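
Eight per-label F1/accuracy pairs are reported, which suggests a multi-label sequence-classification head over these political-discourse themes. Below is a minimal inference sketch; the repo id is a placeholder, and the multi-label (sigmoid-per-label) reading is an assumption to verify against the checkpoint's `config.json`.

```python
# Minimal inference sketch. Assumptions: the repo id is a placeholder for the
# actual hub path, and a multi-label head (sigmoid per label) is inferred from
# the eight per-label metrics above; verify both against the checkpoint config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "PhilippinesPoliBERT"  # replace with the full hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example campaign-related statement to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# One sigmoid per label for multi-label output; use a softmax instead if the
# head turns out to be single-label.
probs = torch.sigmoid(logits).squeeze(0)
for i, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[i]}: {p:.3f}")
```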

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 16
- mixed_precision_training: Native AMP
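
These settings map directly onto `TrainingArguments`; a minimal sketch follows, assuming the standard `Trainer` setup (the `output_dir` is a placeholder, `fp16=True` stands in for "Native AMP", and AdamW (torch) with the stated betas and epsilon is the default optimizer, so it needs no extra arguments).

```python
# Minimal sketch of the hyperparameters above as TrainingArguments.
# Assumption: output_dir is a placeholder; everything else mirrors the list.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="PhilippinesPoliBERT",  # placeholder
    learning_rate=7e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    num_train_epochs=16,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,  # mixed precision ("Native AMP"); requires a CUDA device
)
```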

### Training results

| Training Loss | Epoch | Step | Validation Loss | Regionalism F1 | Regionalism Accuracy | Clientelism F1 | Clientelism Accuracy | Economic Policy F1 | Economic Policy Accuracy | Security F1 | Security Accuracy | Discipline Among Poor F1 | Discipline Among Poor Accuracy | Populism F1 | Populism Accuracy | Marcos Duterte Alliance F1 | Marcos Duterte Alliance Accuracy | Uniteam Positive Campaign F1 | Uniteam Positive Campaign Accuracy | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:------------------:|:------------------------:|:-----------:|:-----------------:|:------------------------:|:------------------------------:|:-----------:|:-----------------:|:--------------------------:|:--------------------------------:|:----------------------------:|:----------------------------------:|:----------:|:----------------:|
| 0.6703        | 1.0   | 250  | 0.4869          | 0.9487         | 0.9635               | 0.8803         | 0.883                | 0.8468             | 0.856                    | 0.8181      | 0.8405            | 0.9647                   | 0.9695                         | 0.5545      | 0.6345            | 0.8339                     | 0.839                            | 0.6386                       | 0.701                              | 0.8107     | 0.8359           |
| 0.2993        | 2.0   | 500  | 0.2892          | 0.9746         | 0.977                | 0.9420         | 0.9465               | 0.9424             | 0.9435                   | 0.9245      | 0.9335            | 0.9713                   | 0.975                          | 0.7850      | 0.792             | 0.8923                     | 0.905                            | 0.8420                       | 0.854                              | 0.9092     | 0.9158           |
| 0.2011        | 3.0   | 750  | 0.2276          | 0.9692         | 0.9705               | 0.9513         | 0.9535               | 0.9488             | 0.949                    | 0.9504      | 0.9535            | 0.9743                   | 0.976                          | 0.8702      | 0.8705            | 0.9290                     | 0.9355                           | 0.8991                       | 0.9                                | 0.9366     | 0.9386           |
| 0.143         | 4.0   | 1000 | 0.2217          | 0.9803         | 0.9805               | 0.9568         | 0.9575               | 0.9496             | 0.9495                   | 0.9555      | 0.9575            | 0.9722                   | 0.9725                         | 0.8834      | 0.884             | 0.9308                     | 0.9365                           | 0.8906                       | 0.8905                             | 0.9399     | 0.9411           |
| 0.1029        | 5.0   | 1250 | 0.2258          | 0.9781         | 0.9785               | 0.9579         | 0.9595               | 0.9505             | 0.9515                   | 0.9483      | 0.952             | 0.9769                   | 0.978                          | 0.8945      | 0.894             | 0.9355                     | 0.9415                           | 0.8867                       | 0.8885                             | 0.9410     | 0.9429           |
| 0.0865        | 6.0   | 1500 | 0.2201          | 0.9795         | 0.98                 | 0.9477         | 0.9475               | 0.9468             | 0.9455                   | 0.9559      | 0.958             | 0.9776                   | 0.978                          | 0.9078      | 0.9075            | 0.9255                     | 0.9305                           | 0.8936                       | 0.892                              | 0.9418     | 0.9424           |
| 0.0796        | 7.0   | 1750 | 0.2157          | 0.9771         | 0.976                | 0.9605         | 0.961                | 0.9579             | 0.958                    | 0.9559      | 0.9575            | 0.9715                   | 0.9745                         | 0.9116      | 0.9115            | 0.9422                     | 0.9465                           | 0.8929                       | 0.8935                             | 0.9462     | 0.9473           |
| 0.0702        | 8.0   | 2000 | 0.2149          | 0.9797         | 0.9795               | 0.9559         | 0.9565               | 0.9429             | 0.9405                   | 0.9545      | 0.9565            | 0.9717                   | 0.9735                         | 0.8992      | 0.8985            | 0.9403                     | 0.9445                           | 0.9023                       | 0.903                              | 0.9433     | 0.9441           |
| 0.0685        | 9.0   | 2250 | 0.2115          | 0.9786         | 0.9775               | 0.9601         | 0.961                | 0.9521             | 0.952                    | 0.9602      | 0.962             | 0.9767                   | 0.9775                         | 0.9020      | 0.9015            | 0.9447                     | 0.9485                           | 0.8936                       | 0.894                              | 0.9460     | 0.9467           |
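
Note that the evaluation results at the top of this card match the epoch-9 row (validation loss 0.2115); although `num_epochs` was set to 16, no rows beyond epoch 9 appear in the log.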


### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1