# 08acd42c9c53ac06ff7f33a659310bdd
This model is a fine-tuned version of google-bert/bert-base-multilingual-uncased on the google/boolq dataset. It achieves the following results on the evaluation set:
- Loss: 0.9119
- Data Size: 1.0
- Epoch Runtime: 16.5808
- Accuracy: 0.6930
- F1 Macro: 0.6845
- Rouge1: 0.6927
- Rouge2: 0.0
- Rougel: 0.6927
- Rougelsum: 0.6930
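
The snippet below is a minimal inference sketch, assuming the checkpoint is loaded as a binary sequence classifier over BoolQ-style (question, passage) pairs; the repository id is taken from this card, and the 0/1 label mapping is an assumption.

```python
# Minimal inference sketch; assumes a binary sequence-classification head
# over (question, passage) pairs and a 0/1 ("no"/"yes") label mapping.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "contemmcm/08acd42c9c53ac06ff7f33a659310bdd"  # repo id from this card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

question = "is the sky blue during the day"
passage = "The sky appears blue because air molecules scatter shorter wavelengths of sunlight."

inputs = tokenizer(question, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 or 1; mapping to no/yes is an assumption
```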
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
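
The original training script is not part of this card; the following is a hedged sketch of a `TrainingArguments` configuration that mirrors the list above. The `output_dir` name and anything not in that list are assumptions, not the author's actual settings.

```python
# Hedged sketch of a TrainingArguments configuration mirroring the
# hyperparameters listed above; output_dir and any argument not in that
# list are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-multilingual-uncased-boolq",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 4 GPUs -> total train batch size of 32
    per_device_eval_batch_size=8,    # 4 GPUs -> total eval batch size of 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```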
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro | Rouge1 | Rouge2 | Rougel | Rougelsum |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 0.6657 | 0 | 1.9636 | 0.6204 | 0.3836 | 0.6204 | 0.0 | 0.6201 | 0.6201 |
| No log | 1 | 294 | 0.7019 | 0.0078 | 3.4390 | 0.3857 | 0.2943 | 0.3854 | 0.0 | 0.3860 | 0.3857 |
| No log | 2 | 588 | 0.6640 | 0.0156 | 2.4479 | 0.6213 | 0.3832 | 0.6213 | 0.0 | 0.6207 | 0.6210 |
| No log | 3 | 882 | 0.6618 | 0.0312 | 2.7828 | 0.6213 | 0.3832 | 0.6213 | 0.0 | 0.6207 | 0.6210 |
| 0.0271 | 4 | 1176 | 0.6616 | 0.0625 | 3.3105 | 0.6213 | 0.3832 | 0.6213 | 0.0 | 0.6207 | 0.6210 |
| 0.0547 | 5 | 1470 | 0.6571 | 0.125 | 4.3462 | 0.6213 | 0.3832 | 0.6213 | 0.0 | 0.6207 | 0.6210 |
| 0.0939 | 6 | 1764 | 0.6567 | 0.25 | 6.2748 | 0.6213 | 0.3832 | 0.6213 | 0.0 | 0.6207 | 0.6210 |
| 0.6276 | 7 | 2058 | 0.6270 | 0.5 | 10.1090 | 0.6581 | 0.5966 | 0.6581 | 0.0 | 0.6575 | 0.6578 |
| 0.5635 | 8 | 2352 | 0.5923 | 1.0 | 16.8610 | 0.6872 | 0.6281 | 0.6869 | 0.0 | 0.6869 | 0.6875 |
| 0.4754 | 9 | 2646 | 0.6054 | 1.0 | 16.5489 | 0.7034 | 0.6752 | 0.7037 | 0.0 | 0.7031 | 0.7039 |
| 0.3568 | 10 | 2940 | 0.7645 | 1.0 | 17.9144 | 0.6967 | 0.6835 | 0.6967 | 0.0 | 0.6964 | 0.6973 |
| 0.2521 | 11 | 3234 | 0.8326 | 1.0 | 16.9828 | 0.7203 | 0.6841 | 0.7203 | 0.0 | 0.7200 | 0.7200 |
| 0.2001 | 12 | 3528 | 0.9119 | 1.0 | 16.5808 | 0.6930 | 0.6845 | 0.6927 | 0.0 | 0.6927 | 0.6930 |
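
The evaluation code is not included in this card; the sketch below shows one plausible way to reproduce the reported accuracy, macro F1, and ROUGE columns, using hypothetical predictions and labels.

```python
# Hypothetical sketch of the reported metrics; the actual evaluation code
# is not part of this card, and the yes/no text mapping for ROUGE is an
# assumption about how those columns were produced.
import evaluate
from sklearn.metrics import accuracy_score, f1_score

preds = [1, 0, 1, 1]   # hypothetical model predictions
labels = [1, 0, 0, 1]  # hypothetical gold labels

print("accuracy:", accuracy_score(labels, preds))
print("f1_macro:", f1_score(labels, preds, average="macro"))

# ROUGE over the textual form of the labels ("yes"/"no").
rouge = evaluate.load("rouge")
id2text = {0: "no", 1: "yes"}
print(rouge.compute(
    predictions=[id2text[p] for p in preds],
    references=[id2text[l] for l in labels],
))
```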
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1