---
library_name: transformers
license: apache-2.0
base_model: google/mt5-base
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Mt5-neutralization-es
  results: []
datasets:
- somosnlp-hackathon-2022/neutral-es
language:
- es
pipeline_tag: text2text-generation
---

# Mt5-neutralization-es

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the [somosnlp-hackathon-2022/neutral-es](https://huggingface.co/datasets/somosnlp-hackathon-2022/neutral-es) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1368
- Bleu: 81.3122
- Gen Len: 17.4896

## Intended uses & limitations

The model rewrites Spanish sentences and texts into neutral, "inclusive" language, for example:

- Los alumnos: el alumnado
- Las enfermeras: el personal sanitario

A hedged inference sketch is provided at the end of this card.

## Training and evaluation data

Training and evaluation dataset: the Spanish Gender Neutralization dataset ([somosnlp-hackathon-2022/neutral-es](https://huggingface.co/datasets/somosnlp-hackathon-2022/neutral-es)).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 440  | 0.1853          | 81.0788 | 18.3125 |
| 1.9771        | 2.0   | 880  | 0.1368          | 81.3122 | 17.4896 |

### Framework versions

- Transformers 4.50.1
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
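
## Inference sketch

Since the card advertises the `text2text-generation` pipeline tag, a minimal usage sketch is shown below. This is not an official snippet from the model authors; the Hub repository id is an assumption inferred from the model name and should be replaced with the actual checkpoint path.

```python
# Minimal inference sketch (assumptions: the Hub repo id below is hypothetical;
# replace it with the actual path to this checkpoint).
from transformers import pipeline

neutralizer = pipeline(
    "text2text-generation",
    model="somosnlp-hackathon-2022/Mt5-neutralization-es",  # assumed repo id
)

examples = [
    "Los alumnos asistieron a la reunión.",
    "Las enfermeras del hospital recibieron el premio.",
]

for sentence in examples:
    # Generate the gender-neutral rewrite for each input sentence.
    result = neutralizer(sentence, max_length=64)
    print(sentence, "->", result[0]["generated_text"])
```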
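
## Training configuration sketch

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as in the sketch below. This is a reconstruction, not the original training script: the dataset column names (`sentence`, `neutral`), split names, and maximum sequence length are assumptions and must be adjusted to the actual neutral-es schema.

```python
# Hedged sketch of a fine-tuning setup matching the hyperparameters above.
# Column names, split names, and max_length are assumptions.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

checkpoint = "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

dataset = load_dataset("somosnlp-hackathon-2022/neutral-es")

def preprocess(batch):
    # Tokenize source sentences and their gender-neutral targets
    # (column names are assumed, not taken from the dataset card).
    model_inputs = tokenizer(batch["sentence"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["neutral"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="Mt5-neutralization-es",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    # The default optimizer is AdamW (torch) with betas=(0.9, 0.999), eps=1e-8.
    eval_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],  # assumed split name
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    processing_class=tokenizer,
)
trainer.train()
```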