---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: CollectedDataModel
  results: []
---

# CollectedDataModel

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.4390
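
A minimal usage sketch follows. It assumes the model is published at the hub id `Mohsen21/CollectedDataModel` (inferred from this card, not confirmed) and that the base model's processor and the standard `microsoft/speecht5_hifigan` vocoder apply; the random speaker embedding is only a smoke test, so substitute a real 512-dim x-vector for a natural voice.

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Hub id inferred from this card; adjust if the repository lives elsewhere.
repo_id = "Mohsen21/CollectedDataModel"

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# SpeechT5 conditions on a 512-dim x-vector speaker embedding. A random
# vector only proves the pipeline runs; use an embedding extracted from
# real speech for intelligible output.
speaker_embeddings = torch.randn(1, 512)

with torch.no_grad():
    speech = model.generate_speech(
        inputs["input_ids"], speaker_embeddings, vocoder=vocoder
    )

# SpeechT5 produces 16 kHz audio.
sf.write("output.wav", speech.numpy(), samplerate=16000)
```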

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
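
As a rough reconstruction, the values above map onto transformers' `Seq2SeqTrainingArguments` as sketched below. The output directory is hypothetical, the 100-step evaluation cadence is inferred from the results table, and the dataset, collator, and `Seq2SeqTrainer` wiring from the original run are not shown.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned",   # hypothetical; the real run's path is unknown
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,     # effective train batch size: 4 * 8 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=4000,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,                    # inferred from the 100-step cadence below
)
```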

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5567        | 0.9913  | 100  | 0.4913          |
| 0.5127        | 1.9827  | 200  | 0.4692          |
| 0.4915        | 2.9740  | 300  | 0.4562          |
| 0.4862        | 3.9653  | 400  | 0.4524          |
| 0.4745        | 4.9566  | 500  | 0.4483          |
| 0.4735        | 5.9480  | 600  | 0.4458          |
| 0.4681        | 6.9393  | 700  | 0.4397          |
| 0.4656        | 7.9306  | 800  | 0.4408          |
| 0.4576        | 8.9219  | 900  | 0.4336          |
| 0.4571        | 9.9133  | 1000 | 0.4343          |
| 0.451         | 10.9046 | 1100 | 0.4339          |
| 0.4517        | 11.8959 | 1200 | 0.4316          |
| 0.4432        | 12.8872 | 1300 | 0.4315          |
| 0.4448        | 13.8786 | 1400 | 0.4357          |
| 0.4455        | 14.8699 | 1500 | 0.4296          |
| 0.4387        | 15.8612 | 1600 | 0.4331          |
| 0.4334        | 16.8525 | 1700 | 0.4359          |
| 0.4373        | 17.8439 | 1800 | 0.4290          |
| 0.4304        | 18.8352 | 1900 | 0.4318          |
| 0.4279        | 19.8265 | 2000 | 0.4305          |
| 0.4294        | 20.8178 | 2100 | 0.4327          |
| 0.4269        | 21.8092 | 2200 | 0.4327          |
| 0.4248        | 22.8005 | 2300 | 0.4309          |
| 0.4255        | 23.7918 | 2400 | 0.4275          |
| 0.43          | 24.7831 | 2500 | 0.4315          |
| 0.4214        | 25.7745 | 2600 | 0.4345          |
| 0.4166        | 26.7658 | 2700 | 0.4362          |
| 0.4173        | 27.7571 | 2800 | 0.4343          |
| 0.4172        | 28.7485 | 2900 | 0.4325          |
| 0.4142        | 29.7398 | 3000 | 0.4329          |
| 0.4134        | 30.7311 | 3100 | 0.4327          |
| 0.4121        | 31.7224 | 3200 | 0.4388          |
| 0.4085        | 32.7138 | 3300 | 0.4352          |
| 0.4095        | 33.7051 | 3400 | 0.4388          |
| 0.4112        | 34.6964 | 3500 | 0.4372          |
| 0.4106        | 35.6877 | 3600 | 0.4388          |
| 0.4054        | 36.6791 | 3700 | 0.4392          |
| 0.4075        | 37.6704 | 3800 | 0.4395          |
| 0.4086        | 38.6617 | 3900 | 0.4393          |
| 0.4125        | 39.6530 | 4000 | 0.4390          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1