---
library_name: transformers
base_model: pilotj/roberta-base-pretrained-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: roberta-base-v1
  results: []
---
# roberta-base-v1

This model is a fine-tuned version of [pilotj/roberta-base-pretrained-v1](https://huggingface.co/pilotj/roberta-base-pretrained-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3920
- Accuracy: 0.8867
- F1 Macro: 0.8576
- F1 Weighted: 0.8880
- Precision: 0.8909
- Recall: 0.8867
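The metric set above (accuracy plus macro/weighted F1) suggests a sequence-classification head. As a quick orientation, here is a minimal inference sketch; the repo id `pilotj/roberta-base-v1` is an assumption based on the card name rather than a confirmed checkpoint path, and the label set is whatever the fine-tuned config carries.

```python
# Minimal inference sketch. Assumptions: the checkpoint is published as
# "pilotj/roberta-base-v1" (a guess from the card name) and carries a
# sequence-classification head, as the reported metrics suggest.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "pilotj/roberta-base-v1"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example input text", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax class index through the config's label mapping, if present.
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```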
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
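For reference, these values map onto the `Trainer` API roughly as follows. This is a sketch, not the exact training script: `output_dir` and the 500-step evaluation cadence (inferred from the results table below) are assumptions, `fp16=True` stands in for "Native AMP", and the Adam betas and epsilon listed above are the `TrainingArguments` defaults, so they are not set explicitly.

```python
# Sketch of how the hyperparameters above map onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-v1",     # placeholder, not confirmed by the card
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,    # effective train batch size 128 x 2 = 256
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                        # "Native AMP" mixed precision
    eval_strategy="steps",            # assumption: matches the 500-step
    eval_steps=500,                   # cadence in the results table below
)
```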
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 0.3932        | 0.1896 | 500  | 0.4138          | 0.8803   | 0.8505   | 0.8816      | 0.8847    | 0.8803 |
| 0.3997        | 0.3792 | 1000 | 0.4097          | 0.8809   | 0.8499   | 0.8824      | 0.8861    | 0.8809 |
| 0.3997        | 0.5688 | 1500 | 0.4126          | 0.8818   | 0.8514   | 0.8834      | 0.8874    | 0.8818 |
| 0.3907        | 0.7584 | 2000 | 0.3988          | 0.8844   | 0.8544   | 0.8856      | 0.8887    | 0.8844 |
| 0.3881        | 0.9480 | 2500 | 0.3956          | 0.8862   | 0.8549   | 0.8871      | 0.8901    | 0.8862 |
| 0.3558        | 1.1377 | 3000 | 0.3971          | 0.8863   | 0.8570   | 0.8874      | 0.8902    | 0.8863 |
| 0.3526        | 1.3273 | 3500 | 0.3999          | 0.8852   | 0.8558   | 0.8867      | 0.8902    | 0.8852 |
| 0.3435        | 1.5169 | 4000 | 0.3991          | 0.8858   | 0.8565   | 0.8870      | 0.8903    | 0.8858 |
| 0.3428        | 1.7065 | 4500 | 0.3929          | 0.8859   | 0.8572   | 0.8871      | 0.8901    | 0.8859 |
| 0.3392        | 1.8961 | 5000 | 0.3920          | 0.8867   | 0.8576   | 0.8880      | 0.8909    | 0.8867 |
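The Recall column matching Accuracy on every row is consistent with weighted averaging over class labels. A hedged sketch of a `compute_metrics` function that would produce these columns, assuming scikit-learn is used for the metric computation:

```python
# Sketch of a compute_metrics function that would yield the columns
# reported above. Assumption: precision and recall are weighted averages,
# which is consistent with the recall column equaling accuracy.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "f1_weighted": f1_score(labels, preds, average="weighted"),
        "precision": precision_score(labels, preds, average="weighted"),
        "recall": recall_score(labels, preds, average="weighted"),
    }
```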
### Framework versions

- Transformers 4.45.1
- Pytorch 2.2.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0