---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output_fp16
  results: []
---

# output_fp16

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4230
- Accuracy: 0.8382

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5485        | 1.0   | 12272  | 0.4865          | 0.8100   |
| 0.4989        | 2.0   | 24544  | 0.4720          | 0.8193   |
| 0.4743        | 3.0   | 36816  | 0.5417          | 0.7859   |
| 0.4762        | 4.0   | 49088  | 0.4359          | 0.8313   |
| 0.4525        | 5.0   | 61360  | 0.4297          | 0.8365   |
| 0.4457        | 6.0   | 73632  | 0.4273          | 0.8398   |
| 0.4205        | 7.0   | 85904  | 0.4343          | 0.8321   |
| 0.4315        | 8.0   | 98176  | 0.4287          | 0.8357   |
| 0.4271        | 9.0   | 110448 | 0.4299          | 0.8394   |
| 0.4031        | 10.0  | 122720 | 0.4250          | 0.8353   |
| 0.4000        | 11.0  | 134992 | 0.4401          | 0.8345   |
| 0.3899        | 12.0  | 147264 | 0.4178          | 0.8418   |
| 0.3921        | 13.0  | 159536 | 0.4313          | 0.8386   |
| 0.3849        | 14.0  | 171808 | 0.4212          | 0.8378   |
| 0.3777        | 15.0  | 184080 | 0.4230          | 0.8382   |

### Framework versions

- Transformers 4.53.2
- PyTorch 2.6.0+cu124
- Datasets 2.18.0
- Tokenizers 0.21.2
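
### Training configuration sketch

For reference, here is a minimal sketch of a `TrainingArguments` object matching the hyperparameters listed above. It is a reconstruction under stated assumptions, not the exact training script: the dataset, preprocessing, and `Trainer` wiring are not documented in this card, and `eval_strategy="epoch"` is inferred from the per-epoch validation results in the table.

```python
from transformers import TrainingArguments

# Reconstructed from the "Training hyperparameters" section above.
# The dataset, data collator, and Trainer setup are not documented
# in this card and are therefore omitted.
training_args = TrainingArguments(
    output_dir="output_fp16",
    learning_rate=3e-4,              # 0.0003
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                       # Native AMP mixed-precision training
    eval_strategy="epoch",           # inferred: validation is reported per epoch
)
```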
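
## How to use

The accuracy metric suggests the model carries a classification head, so the snippet below loads the checkpoint with `AutoModelForSequenceClassification`. This is a minimal sketch under that assumption: the label set is undocumented, and `output_fp16` is assumed to be the local checkpoint directory (replace it with the Hub repo id if the model was pushed).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Path assumption: the Trainer output directory named in this card.
model_dir = "output_fp16"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.eval()

inputs = tokenizer("Example input text", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# Label names are not documented in this card, so id2label may hold
# generic placeholders such as LABEL_0, LABEL_1, ...
print(model.config.id2label.get(pred_id, str(pred_id)))
```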