---
library_name: transformers
license: mit
base_model: intfloat/multilingual-e5-large-instruct
tags:
- generated_from_trainer
model-index:
- name: e5_Dechets_MultiLabel_07082025
  results: []
---

# e5_Dechets_MultiLabel_07082025

This model is a fine-tuned version of [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2103
- F1 Weighted: 0.9299

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch at the end of this card):
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 0.9037        | 1.0   | 180  | 0.5881          | 0.7452      |
| 0.5128        | 2.0   | 360  | 0.3669          | 0.8503      |
| 0.3466        | 3.0   | 540  | 0.3095          | 0.8792      |
| 0.2719        | 4.0   | 720  | 0.2777          | 0.8974      |
| 0.2186        | 5.0   | 900  | 0.2298          | 0.9143      |
| 0.1839        | 6.0   | 1080 | 0.2198          | 0.9188      |
| 0.1587        | 7.0   | 1260 | 0.2159          | 0.9219      |
| 0.1343        | 8.0   | 1440 | 0.2041          | 0.9278      |
| 0.117         | 9.0   | 1620 | 0.2101          | 0.9261      |
| 0.1037        | 10.0  | 1800 | 0.1985          | 0.9273      |
| 0.095         | 11.0  | 1980 | 0.2103          | 0.9299      |

### Framework versions

- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
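
## How to use

The card leaves usage unspecified; the following is a minimal inference sketch, assuming the checkpoint carries an `AutoModelForSequenceClassification` head configured with `problem_type="multi_label_classification"` (consistent with the "MultiLabel" in the model name). The repo id, the example text, and the 0.5 decision threshold are assumptions, not confirmed by the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The repo id is an assumption; replace it with the actual hub id or local path.
model_id = "e5_Dechets_MultiLabel_07082025"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical example input; the expected input format was not documented.
text = "carton d'emballage et bouteille en plastique"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: an independent sigmoid per class,
# thresholded at 0.5 (assumed threshold).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```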
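
## Reproducing the training configuration

A sketch mapping the hyperparameters listed above onto `transformers.TrainingArguments`. The `output_dir` and the per-epoch evaluation schedule (inferred from the training-results table) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="e5_Dechets_MultiLabel_07082025",  # placeholder path
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    num_train_epochs=15,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-8 (defaults)
    eval_strategy="epoch",           # assumed: the table reports one eval per epoch
)
```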
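
## Computing weighted F1

The card reports "F1 Weighted" on the evaluation set. A sketch of a `compute_metrics` function that would produce such a score for multi-label outputs, assuming sigmoid activation and a 0.5 threshold; the actual metric implementation used during training is not documented.

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    """Weighted F1 over multi-label predictions (0.5 threshold is an assumption)."""
    logits = eval_pred.predictions
    labels = eval_pred.label_ids
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    preds = (probs > 0.5).astype(int)
    return {
        "f1_weighted": f1_score(labels, preds, average="weighted", zero_division=0)
    }
```

Passed to `Trainer(compute_metrics=compute_metrics, ...)`, this yields one weighted-F1 value per evaluation, matching the shape of the results table above.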