---
library_name: transformers
license: mit
base_model: intfloat/multilingual-e5-large-instruct
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: e5_Eau_v3
    results: []
---

# e5_Eau_v3

This model is a fine-tuned version of [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0705
- Accuracy: 0.9745
- F1: 0.9749
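The card does not state the task, but the accuracy and F1 metrics indicate a classification head on top of the encoder. A minimal inference sketch under that assumption, loading the checkpoint with `AutoModelForSequenceClassification`; the repo id `Ludo33/e5_Eau_v3`, the example text, and the presence of label names in the model config are all assumptions:

```python
# Minimal inference sketch. Assumptions: the checkpoint is a sequence
# classifier, and the repo id is Ludo33/e5_Eau_v3 (not confirmed by the card).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Ludo33/e5_Eau_v3"  # assumed repo id; adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Quelle est la qualité de l'eau ?"  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred_id, str(pred_id)))
```

Note that E5-instruct embedding models normally expect an instruction-prefixed input; whether this fine-tuned classifier retains that convention is not documented here.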

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows this list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
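A minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` value is an assumption, and the original training script is not included in this card:

```python
# Sketch only: reproduces the listed hyperparameters with
# transformers.TrainingArguments. output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="e5_Eau_v3",          # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # effective batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision
)
```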

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.5263        | 0.9942 | 86   | 0.1482          | 0.9487   | 0.9493 |
| 0.1935        | 1.9942 | 172  | 0.1506          | 0.9495   | 0.9499 |
| 0.1538        | 2.9942 | 258  | 0.0885          | 0.9692   | 0.9694 |
| 0.1279        | 3.9942 | 344  | 0.0759          | 0.9739   | 0.9737 |
| 0.1082        | 4.9942 | 430  | 0.0570          | 0.9772   | 0.9772 |
| 0.1231        | 5.9942 | 516  | 0.0522          | 0.9777   | 0.9778 |
| 0.093         | 6.9942 | 602  | 0.0546          | 0.9770   | 0.9771 |
| 0.0709        | 7.9942 | 688  | 0.0705          | 0.9745   | 0.9749 |
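The training script is not published here, but per-epoch accuracy and F1 columns like these are typically produced by a `compute_metrics` callback passed to the `Trainer`. A minimal sketch using scikit-learn; the `weighted` F1 average is an assumption, since the card does not say which average was used:

```python
# Sketch of a compute_metrics callback that would produce these columns.
# Assumption: weighted F1 averaging (the card does not state the average).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```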

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1