---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: irony_pt_Brazil
results: []
---
# irony_pt_Brazil
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on part of the MultiPICo dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0033
- Accuracy: 0.6463
- Precision: 0.44
- Recall: 0.5739
- F1: 0.4981
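For quick inspection, the fine-tuned checkpoint can be queried with the `transformers` pipeline. This is a minimal sketch: the unqualified model id is a placeholder (the actual Hub namespace is not stated here), and the label mapping shown in the comment is an assumption, not documented in this card.

```python
from transformers import pipeline

# Placeholder model id; prepend the actual Hub namespace hosting this checkpoint.
clf = pipeline("text-classification", model="irony_pt_Brazil")

print(clf("Que ótimo, mais uma segunda-feira de chuva..."))
# e.g. [{'label': 'LABEL_1', 'score': 0.87}] -- label-to-class mapping is an assumption
```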
## Model description
The model is trained using only the annotations of annotators from Brazil, on instances in Portuguese (PT and BR linguistic varieties). The annotations from these annotators are aggregated by majority voting, and the resulting labels are used to fine-tune the model.
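For illustration, majority voting over per-annotator labels can be implemented as below. This is a sketch only; the example labels are hypothetical and not taken from MultiPICo.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label; ties go to the label seen first."""
    counts = Counter(labels)
    label, _ = counts.most_common(1)[0]
    return label

# Example: three annotators labelling one instance (1 = ironic, 0 = not ironic).
print(majority_vote([1, 0, 1]))  # -> 1
```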
## Training and evaluation data
The model has been trained on the annotations of Brazilian annotators in the MultiPICo dataset (instances in Portuguese). The data has been randomly split into a train set and a validation set.
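A random split of that kind can be reproduced with the `datasets` library. In this sketch the data is a hypothetical stand-in, and the 80/20 ratio is an assumption (the actual split ratio is not stated in this card); `seed=42` matches the training seed listed below.

```python
from datasets import Dataset

# Hypothetical aggregated data: one majority-vote label per Portuguese instance.
ds = Dataset.from_dict({
    "text": ["exemplo 1", "exemplo 2", "exemplo 3", "exemplo 4"],
    "label": [0, 1, 0, 1],
})

# 80/20 split is an assumption; seed matches the reported training seed.
splits = ds.train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```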
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
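These settings map onto `transformers.TrainingArguments` roughly as follows. A sketch, not the authors' script: the output path is a placeholder, the Adam betas/epsilon above are the `Trainer` defaults, and per-epoch evaluation is inferred from the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="irony_pt_Brazil",   # placeholder output path
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",    # assumption, based on the per-epoch table below
)
```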
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0043 | 1.0 | 71 | 0.0042 | 0.5745 | 0.3448 | 0.4348 | 0.3846 |
| 0.0043 | 2.0 | 142 | 0.0041 | 0.5266 | 0.3562 | 0.6783 | 0.4671 |
| 0.0039 | 3.0 | 213 | 0.0039 | 0.5266 | 0.3562 | 0.6783 | 0.4671 |
| 0.004 | 4.0 | 284 | 0.0038 | 0.6170 | 0.4110 | 0.5826 | 0.4820 |
| 0.0036 | 5.0 | 355 | 0.0035 | 0.6516 | 0.4452 | 0.5652 | 0.4981 |
| 0.0035 | 6.0 | 426 | 0.0036 | 0.4973 | 0.3630 | 0.8522 | 0.5091 |
| 0.0031 | 7.0 | 497 | 0.0033 | 0.5904 | 0.4156 | 0.8348 | 0.5549 |
| 0.0027 | 8.0 | 568 | 0.0033 | 0.6543 | 0.4460 | 0.5391 | 0.4882 |
| 0.0027 | 9.0 | 639 | 0.0031 | 0.6144 | 0.4257 | 0.7478 | 0.5426 |
| 0.0023 | 10.0 | 710 | 0.0029 | 0.6303 | 0.4388 | 0.7478 | 0.5531 |
| 0.0021 | 11.0 | 781 | 0.0031 | 0.6383 | 0.4348 | 0.6087 | 0.5072 |
| 0.0018 | 12.0 | 852 | 0.0033 | 0.6463 | 0.44 | 0.5739 | 0.4981 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1