# dnrti_our
This model (published as `Cyber-ThreaD/RoBERTa-APTNER`) is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2893
- Precision: 0.5617
- Recall: 0.5754
- F1: 0.5685
- Accuracy: 0.9214
## Model description
More information needed
## Intended uses & limitations
More information needed
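Since the card does not document intended uses, the snippet below is only a minimal inference sketch. It assumes the checkpoint is a token-classification (NER) model, which the entity-level precision/recall/F1 metrics suggest; the repo id is taken from the published model listing and the example sentence is purely illustrative.

```python
from transformers import pipeline

# Repo id assumed from the published model listing; adjust if the checkpoint
# lives under a different name.
ner = pipeline(
    "token-classification",
    model="Cyber-ThreaD/RoBERTa-APTNER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Illustrative input only; the actual label set is not documented in this card.
text = "APT29 used spear-phishing emails to deliver the WellMess malware."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```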
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
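As a reproduction aid, here is a minimal `TrainingArguments` sketch matching the values above. The dataset, model head, and `Trainer` wiring are not documented in this card and are therefore omitted or assumed.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dnrti_our",            # assumed output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    # Adam settings reported above (the library defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    # Assumption: evaluation every 500 steps, as implied by the results table
    evaluation_strategy="steps",
    eval_steps=500,
    logging_steps=500,
)
```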
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.6248 | 0.59 | 500 | 0.3242 | 0.5148 | 0.5422 | 0.5281 | 0.9182 |
| 0.3048 | 1.19 | 1000 | 0.2893 | 0.5617 | 0.5754 | 0.5685 | 0.9214 |
| 0.2449 | 1.78 | 1500 | 0.3179 | 0.5095 | 0.6171 | 0.5582 | 0.9148 |
| 0.2088 | 2.37 | 2000 | 0.3358 | 0.5238 | 0.6368 | 0.5748 | 0.9099 |
| 0.1788 | 2.97 | 2500 | 0.3198 | 0.5496 | 0.6802 | 0.6080 | 0.9181 |
| 0.1433 | 3.56 | 3000 | 0.3423 | 0.5565 | 0.6491 | 0.5992 | 0.9179 |
| 0.1381 | 4.15 | 3500 | 0.3747 | 0.5633 | 0.6225 | 0.5914 | 0.9168 |
| 0.1161 | 4.74 | 4000 | 0.4113 | 0.5169 | 0.6542 | 0.5775 | 0.9093 |
| 0.1002 | 5.34 | 4500 | 0.3938 | 0.5487 | 0.6431 | 0.5921 | 0.9150 |
| 0.0954 | 5.93 | 5000 | 0.3862 | 0.5612 | 0.6482 | 0.6016 | 0.9192 |
| 0.0762 | 6.52 | 5500 | 0.4267 | 0.5576 | 0.6416 | 0.5967 | 0.9169 |
| 0.0741 | 7.12 | 6000 | 0.4455 | 0.5693 | 0.6434 | 0.6041 | 0.9184 |
| 0.064 | 7.71 | 6500 | 0.4512 | 0.5672 | 0.6368 | 0.6000 | 0.9177 |
| 0.0567 | 8.3 | 7000 | 0.4559 | 0.5682 | 0.6269 | 0.5962 | 0.9188 |
| 0.0504 | 8.9 | 7500 | 0.4841 | 0.5553 | 0.6422 | 0.5956 | 0.9150 |
| 0.0465 | 9.49 | 8000 | 0.4834 | 0.5606 | 0.6380 | 0.5968 | 0.9169 |
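The precision, recall, F1, and accuracy columns look like entity-level scores of the kind produced with `seqeval`; the actual evaluation code for this run is not documented, so the following `compute_metrics` sketch is only an assumption about how such numbers are typically computed for token classification.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred, label_list):
    """label_list is assumed to hold the checkpoint's string tags; it is not
    documented in this card. With Trainer, partially apply it, e.g.
    functools.partial(compute_metrics, label_list=label_list)."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special/padding tokens, which are labelled -100 by the collator.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```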
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1