# dnrti_our
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unnamed dataset (presumably DNRTI, the threat-intelligence NER corpus, given the model name). It achieves the following results on the evaluation set; these figures match the step-2500 row in the results table below, the checkpoint with the lowest validation loss:

- Loss: 0.2414
- Precision: 0.7221
- Recall: 0.7683
- F1: 0.7445
- Accuracy: 0.9283
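
The entity-level precision/recall/F1 metrics suggest this is a token-classification (NER) checkpoint. A minimal usage sketch, assuming that task and taking the repo ID `Cyber-ThreaD/RoBERTa-DNRTI` from the hub listing for this card:

```python
from transformers import pipeline

# Hedged sketch: task and repo ID are assumptions based on the card's metrics
# and the hub listing, not stated in the auto-generated text itself.
ner = pipeline(
    "token-classification",
    model="Cyber-ThreaD/RoBERTa-DNRTI",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

print(ner("APT28 used spear-phishing emails to deliver the X-Agent malware."))
```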
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
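
A minimal `TrainingArguments` sketch mirroring the list above. The output directory and the 500-step evaluation cadence (inferred from the results table) are assumptions; the listed Adam betas and epsilon are the Trainer defaults, so they need no explicit flags:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dnrti_our",       # assumed run name, taken from the card title
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
    evaluation_strategy="steps",  # assumed: the table logs an eval every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```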
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.5955 | 0.76 | 500 | 0.3862 | 0.5271 | 0.6278 | 0.5731 | 0.8741 |
| 0.3197 | 1.52 | 1000 | 0.3042 | 0.6336 | 0.6674 | 0.6501 | 0.9003 |
| 0.2565 | 2.28 | 1500 | 0.2859 | 0.6474 | 0.7315 | 0.6869 | 0.9095 |
| 0.2067 | 3.04 | 2000 | 0.2631 | 0.6955 | 0.7605 | 0.7265 | 0.9218 |
| 0.1657 | 3.81 | 2500 | 0.2414 | 0.7221 | 0.7683 | 0.7445 | 0.9283 |
| 0.1311 | 4.57 | 3000 | 0.2424 | 0.7239 | 0.7812 | 0.7514 | 0.9307 |
| 0.1178 | 5.33 | 3500 | 0.2639 | 0.7366 | 0.7830 | 0.7591 | 0.9333 |
| 0.099 | 6.09 | 4000 | 0.2692 | 0.7321 | 0.8070 | 0.7677 | 0.9328 |
| 0.0838 | 6.85 | 4500 | 0.2505 | 0.7663 | 0.7913 | 0.7786 | 0.9376 |
| 0.0728 | 7.61 | 5000 | 0.2731 | 0.7392 | 0.8093 | 0.7726 | 0.9341 |
| 0.0654 | 8.37 | 5500 | 0.2725 | 0.7601 | 0.8056 | 0.7822 | 0.9370 |
| 0.0589 | 9.13 | 6000 | 0.2770 | 0.7588 | 0.8158 | 0.7862 | 0.9386 |
| 0.0536 | 9.89 | 6500 | 0.2766 | 0.7610 | 0.8171 | 0.7881 | 0.9390 |
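
The Precision/Recall/F1 columns look like seqeval entity-level scores, as computed by the standard transformers token-classification example scripts; this is an assumption, as the card does not name the metric. A sketch of that computation, with illustrative label names not taken from this card:

```python
import evaluate

# Hedged sketch: assumes seqeval-style entity-level scoring; labels are made up.
seqeval = evaluate.load("seqeval")
predictions = [["O", "B-HackOrg", "I-HackOrg", "O"]]
references = [["O", "B-HackOrg", "O", "O"]]
print(seqeval.compute(predictions=predictions, references=references))
# Yields overall_precision / overall_recall / overall_f1 / overall_accuracy,
# the same aggregate fields reported in the table above.
```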
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1