30aa6304118bcbb5b00adcc6a36673af

This model is a fine-tuned version of google-t5/t5-3b on the Helsinki-NLP/opus_books [de-fr] dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3122
  • Data Size: 1.0
  • Epoch Runtime: 397.1098
  • Bleu: 10.3923
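
The model can be loaded with the standard transformers seq2seq classes. A minimal usage sketch (the German→French task prefix follows T5's general convention; the exact prefix used during fine-tuning is not documented in this card, so treat it as an assumption):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Repo id taken from this card's title; adjust if the model is hosted
# under a different name.
model_id = "contemmcm/30aa6304118bcbb5b00adcc6a36673af"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 models are prompted with a task prefix (assumed here, not confirmed
# by the card).
text = "translate German to French: Das Buch liegt auf dem Tisch."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```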

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 32
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
  • lr_scheduler_type: constant
  • num_epochs: 50
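
The hyperparameters above map roughly onto a `Seq2SeqTrainingArguments` configuration as follows (a sketch, not the actual training script, which is not included in this card; the per-device batch size of 8 across 4 GPUs yields the total batch size of 32):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration implied by the hyperparameter list above.
# The output_dir name is hypothetical.
args = Seq2SeqTrainingArguments(
    output_dir="t5-3b-opus-books-de-fr",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # x 4 devices = 32 total
    per_device_eval_batch_size=8,    # x 4 devices = 32 total
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    predict_with_generate=True,      # assumption: required to compute BLEU
)
```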

Training results

| Training Loss | Epoch | Step  | Validation Loss | Data Size | Epoch Runtime | Bleu    |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:-------------:|:-------:|
| No log        | 0     | 0     | 2.0334          | 0         | 25.0784       | 4.9579  |
| No log        | 1     | 872   | 1.7639          | 0.0078    | 29.5733       | 6.7638  |
| No log        | 2     | 1744  | 1.6814          | 0.0156    | 38.5627       | 7.8419  |
| 0.0295        | 3     | 2616  | 1.6170          | 0.0312    | 49.5605       | 7.2458  |
| 0.1116        | 4     | 3488  | 1.5600          | 0.0625    | 66.5870       | 7.7329  |
| 1.6824        | 5     | 4360  | 1.4948          | 0.125     | 81.7043       | 8.0213  |
| 1.5638        | 6     | 5232  | 1.4233          | 0.25      | 138.3493      | 8.8669  |
| 1.4028        | 7     | 6104  | 1.3389          | 0.5       | 228.3327      | 9.5633  |
| 1.2819        | 8     | 6976  | 1.2618          | 1.0       | 420.3590      | 10.2029 |
| 1.1336        | 9     | 7848  | 1.2301          | 1.0       | 416.9507      | 10.5613 |
| 1.0311        | 10    | 8720  | 1.2166          | 1.0       | 416.8892      | 10.4853 |
| 0.9222        | 11    | 9592  | 1.2308          | 1.0       | 411.2438      | 10.4009 |
| 0.811         | 12    | 10464 | 1.2448          | 1.0       | 413.9808      | 10.6733 |
| 0.7448        | 13    | 11336 | 1.2777          | 1.0       | 415.1636      | 10.4867 |
| 0.6513        | 14    | 12208 | 1.3122          | 1.0       | 397.1098      | 10.3923 |
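
The Bleu column reports corpus-level BLEU on a 0–100 scale. As a reference for how such a score is computed, here is a minimal, unsmoothed corpus-BLEU sketch; the evaluation itself presumably uses a library implementation such as sacrebleu, so this is illustrative only:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references, max_n=4):
    """Corpus BLEU: clipped n-gram precisions (up to max_n, uniform
    weights) combined geometrically, times a brevity penalty. No
    smoothing, single reference per candidate."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # candidate n-gram counts per order
    cand_len = ref_len = 0
    for cand, ref in zip(candidates, references):
        c, r = cand.split(), ref.split()
        cand_len += len(c)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            cand_ngrams, ref_ngrams = ngrams(c, n), ngrams(r, n)
            clipped[n - 1] += sum(min(cnt, ref_ngrams[g])
                                  for g, cnt in cand_ngrams.items())
            totals[n - 1] += sum(cand_ngrams.values())
    if min(clipped) == 0:  # any precision of 0 zeroes the geometric mean
        return 0.0
    log_precision = sum(math.log(m / t) for m, t in zip(clipped, totals)) / max_n
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / cand_len)
    return 100.0 * bp * math.exp(log_precision)
```

A candidate identical to its reference scores 100; a candidate sharing no 4-gram with the reference scores 0 under this unsmoothed variant.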

Framework versions

  • Transformers 4.57.0
  • Pytorch 2.8.0+cu128
  • Datasets 4.2.0
  • Tokenizers 0.22.1
