---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-multilingual-sdg-classification
results: []
datasets:
- albertmartinez/OSDG
pipeline_tag: text-classification
---
# bert-multilingual-sdg-classification
This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on the [albertmartinez/OSDG](https://huggingface.co/datasets/albertmartinez/OSDG) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7207
- F1: 0.7925
## Model description
This model fine-tunes [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) for text classification of United Nations Sustainable Development Goals (SDGs). Because the base model is multilingual, the classifier can be applied to text in the languages covered by multilingual BERT, though performance will depend on how well each language is represented in the fine-tuning data.
## Intended uses & limitations
The model is intended for assigning an SDG label to short text passages (for example, abstracts or policy excerpts) via the `text-classification` pipeline, as sketched below. It is not designed for text unrelated to sustainable development, and predictions on languages underrepresented in the training data should be treated with caution.
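A minimal inference sketch using the 🤗 Transformers pipeline API; the repo id below is a placeholder, so substitute this model's actual Hub path:

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub path of this model.
classifier = pipeline(
    "text-classification",
    model="bert-multilingual-sdg-classification",
)

print(classifier("Expanding access to clean water and sanitation in rural areas."))
# e.g. [{'label': '...', 'score': 0.93}]
```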
## Training and evaluation data
The model was fine-tuned and evaluated on the [albertmartinez/OSDG](https://huggingface.co/datasets/albertmartinez/OSDG) dataset, which pairs text excerpts with Sustainable Development Goal labels.
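For reference, a sketch of loading the dataset with 🤗 Datasets; the exact split and column names are not stated on this card, so check the dataset card:

```python
from datasets import load_dataset

# Load the OSDG dataset from the Hub; inspect the printed splits/columns
# before wiring it into training.
ds = load_dataset("albertmartinez/OSDG")
print(ds)
```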
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 5.0
- mixed_precision_training: Native AMP
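A minimal sketch of how these hyperparameters map onto `TrainingArguments`; the label count, F1 averaging method, and dataset wiring are assumptions, not part of the original card:

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "google-bert/bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=16 is an assumption; set it to the dataset's actual label count.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=16)

def compute_metrics(eval_pred):
    # Weighted averaging is an assumption; the card does not state the method.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="weighted")}

args = TrainingArguments(
    output_dir="bert-multilingual-sdg-classification",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",        # AdamW; betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=5.0,
    fp16=True,                  # native AMP mixed-precision training
    eval_strategy="epoch",
)

# Dataset wiring is omitted; tokenize the OSDG splits and pass them in:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   processing_class=tokenizer, compute_metrics=compute_metrics)
# trainer.train()
```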
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1122 | 1.0 | 538 | 1.0625 | 0.6814 |
| 0.9564 | 2.0 | 1076 | 0.8073 | 0.7686 |
| 0.7652 | 3.0 | 1614 | 0.7433 | 0.7886 |
| 0.6619 | 4.0 | 2152 | 0.7261 | 0.7919 |
| 0.6038 | 5.0 | 2690 | 0.7207 | 0.7925 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.1.2.post304
- Datasets 3.2.0
- Tokenizers 0.21.0