---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
model-index:
- name: bold-cod-455
  results: []
---

# bold-cod-455

This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- Hamming Loss: 0.0605
- Zero One Loss: 0.3800
- Jaccard Score: 0.3247
- Hamming Loss Optimised: 0.0579
- Hamming Loss Threshold: 0.5913
- Zero One Loss Optimised: 0.3862
- Zero One Loss Threshold: 0.4581
- Jaccard Score Optimised: 0.3111
- Jaccard Score Threshold: 0.3022

Sketches showing how these thresholds can be applied at inference time and reproduced appear at the end of this card.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.6795250522175907e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 9

### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| No log        | 1.0   | 100  | 0.2399          | 0.0751       | 0.6375        | 0.6196        | 0.0736                 | 0.4031                 | 0.5413                  | 0.2884                  | 0.4770                  | 0.2690                  |
| No log        | 2.0   | 200  | 0.1861          | 0.0620       | 0.4600        | 0.4166        | 0.0617                 | 0.6009                 | 0.4487                  | 0.4640                  | 0.3375                  | 0.2916                  |
| No log        | 3.0   | 300  | 0.1692          | 0.0583       | 0.4525        | 0.4103        | 0.0579                 | 0.5425                 | 0.4087                  | 0.4147                  | 0.3241                  | 0.2491                  |
| No log        | 4.0   | 400  | 0.1648          | 0.0589       | 0.4237        | 0.3791        | 0.0576                 | 0.5207                 | 0.4000                  | 0.4601                  | 0.3181                  | 0.2985                  |
| 0.2003        | 5.0   | 500  | 0.1648          | 0.0594       | 0.4087        | 0.3603        | 0.0574                 | 0.5612                 | 0.4113                  | 0.4029                  | 0.3139                  | 0.3039                  |
| 0.2003        | 6.0   | 600  | 0.1707          | 0.0617       | 0.4025        | 0.3389        | 0.0587                 | 0.6338                 | 0.3988                  | 0.5041                  | 0.3148                  | 0.2846                  |
| 0.2003        | 7.0   | 700  | 0.1701          | 0.0606       | 0.3888        | 0.3359        | 0.0586                 | 0.6001                 | 0.3900                  | 0.4468                  | 0.3147                  | 0.2914                  |
| 0.2003        | 8.0   | 800  | 0.1690          | 0.0614       | 0.3850        | 0.3303        | 0.0584                 | 0.6970                 | 0.3838                  | 0.5334                  | 0.3155                  | 0.2859                  |
| 0.2003        | 9.0   | 900  | 0.1686          | 0.0605       | 0.3800        | 0.3247        | 0.0579                 | 0.5913                 | 0.3862                  | 0.4581                  | 0.3111                  | 0.3022                  |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
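
## Usage and reproduction sketches

### Inference with an optimised threshold

The thresholds reported above are meant to be applied to per-label sigmoid probabilities in a multi-label setup. A minimal inference sketch, assuming the checkpoint is published under the hypothetical Hub id `your-org/bold-cod-455` (the actual repository id and label names are not given in this card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical Hub id -- replace with the actual repository id of this checkpoint.
model_id = "your-org/bold-cod-455"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Threshold reported above as optimising the Jaccard score on the evaluation set.
JACCARD_THRESHOLD = 0.3022

inputs = tokenizer("Example text to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: an independent sigmoid per label, then a hard threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= JACCARD_THRESHOLD]
print(predicted)
```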
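
### Reproducing the threshold sweep

The paired `Optimised`/`Threshold` values suggest a single global decision threshold tuned per metric on the evaluation set. Below is a sketch of one way to run such a sweep with scikit-learn; this is an assumption about the procedure, not the original evaluation code, and `y_true`/`probs` are random placeholders standing in for the evaluation labels and sigmoid outputs:

```python
import numpy as np
from sklearn.metrics import hamming_loss, jaccard_score, zero_one_loss

def sweep_threshold(y_true, probs, metric, maximise=False):
    """Grid-search a single global decision threshold for one metric."""
    grid = np.linspace(0.05, 0.95, 181)
    scores = [metric(y_true, (probs >= t).astype(int)) for t in grid]
    best = int(np.argmax(scores)) if maximise else int(np.argmin(scores))
    return scores[best], grid[best]

# Placeholder data; in practice use the eval-set label matrix and model outputs,
# both shaped [n_samples, n_labels].
rng = np.random.default_rng(2024)
y_true = rng.integers(0, 2, size=(200, 8))
probs = rng.random(size=(200, 8))

# Loss-like metrics are minimised; the Jaccard score is maximised.
hamming, hamming_t = sweep_threshold(y_true, probs, hamming_loss)
zero_one, zero_one_t = sweep_threshold(y_true, probs, zero_one_loss)
jaccard, jaccard_t = sweep_threshold(
    y_true, probs,
    lambda yt, yp: jaccard_score(yt, yp, average="samples", zero_division=0),
    maximise=True,
)
```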
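
### Mapping the hyperparameters to `TrainingArguments`

For reference, the training hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch of the implied configuration, not the original training script; model, dataset, and metric wiring are omitted, and the per-epoch evaluation schedule is inferred from the results table:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bold-cod-455",
    learning_rate=2.6795250522175907e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=2024,
    optim="adamw_torch",      # AdamW defaults: betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=9,
    eval_strategy="epoch",    # assumption, consistent with the per-epoch rows above
)
```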