hin-hindi-tss-speecht5

This model is a fine-tuned version of microsoft/speecht5_tts on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0341

Model description

More information needed

Intended uses & limitations

More information needed
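
Although the card does not document intended uses, the checkpoint can in principle be loaded with the standard SpeechT5 text-to-speech classes from transformers. The sketch below is illustrative only: the speaker-embedding source (Matthijs/cmu-arctic-xvectors) and the example Hindi sentence are assumptions, not part of this model's documented setup.

```python
# Minimal inference sketch (assumed setup, not documented by this card):
# load the fine-tuned checkpoint with the standard SpeechT5 TTS classes and
# synthesize speech conditioned on a 512-dim x-vector speaker embedding.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "sil-ai/hin-hindi-tss-speecht5"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="नमस्ते, आप कैसे हैं?", return_tensors="pt")

# Illustrative speaker embedding; an x-vector from the actual fine-tuning data
# would normally be a better match than this CMU ARCTIC example.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```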

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 3407
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 4000
  • training_steps: 40000
  • mixed_precision_training: Native AMP
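
For reference, the list above maps onto Seq2SeqTrainingArguments roughly as follows. This is a hedged sketch: the output directory is a placeholder, and fp16 is an assumption for "Native AMP" (the run may have used bf16 instead).

```python
# Approximate reconstruction of the training arguments listed above.
# output_dir is a placeholder; fp16=True is an assumption for "Native AMP".
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="hin-hindi-tss-speecht5",   # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,         # total train batch size 8 * 4 = 32
    seed=3407,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=4000,
    max_steps=40000,
    fp16=True,
)
```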

Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.0633        | 1.9970  | 1000  | 0.0472          |
| 0.0537        | 3.9930  | 2000  | 0.0419          |
| 0.0491        | 5.9890  | 3000  | 0.0392          |
| 0.0474        | 7.9850  | 4000  | 0.0393          |
| 0.0449        | 9.9810  | 5000  | 0.0385          |
| 0.0458        | 11.9770 | 6000  | 0.0385          |
| 0.0435        | 13.9730 | 7000  | 0.0370          |
| 0.043         | 15.9690 | 8000  | 0.0388          |
| 0.0409        | 17.9650 | 9000  | 0.0369          |
| 0.0396        | 19.9610 | 10000 | 0.0360          |
| 0.0405        | 21.9570 | 11000 | 0.0360          |
| 0.0383        | 23.9530 | 12000 | 0.0353          |
| 0.0401        | 25.9491 | 13000 | 0.0354          |
| 0.0383        | 27.9451 | 14000 | 0.0353          |
| 0.0372        | 29.9411 | 15000 | 0.0354          |
| 0.0373        | 31.9371 | 16000 | 0.0350          |
| 0.0371        | 33.9331 | 17000 | 0.0343          |
| 0.0381        | 35.9291 | 18000 | 0.0354          |
| 0.0361        | 37.9251 | 19000 | 0.0351          |
| 0.0362        | 39.9211 | 20000 | 0.0353          |
| 0.0347        | 41.9171 | 21000 | 0.0346          |
| 0.0346        | 43.9131 | 22000 | 0.0344          |
| 0.0347        | 45.9091 | 23000 | 0.0348          |
| 0.0342        | 47.9051 | 24000 | 0.0342          |
| 0.0342        | 49.9011 | 25000 | 0.0340          |
| 0.0333        | 51.8971 | 26000 | 0.0343          |
| 0.0343        | 53.8931 | 27000 | 0.0342          |
| 0.0332        | 55.8891 | 28000 | 0.0341          |
| 0.0325        | 57.8851 | 29000 | 0.0341          |
| 0.0324        | 59.8811 | 30000 | 0.0339          |
| 0.0319        | 61.8771 | 31000 | 0.0341          |
| 0.0336        | 63.8731 | 32000 | 0.0343          |
| 0.0328        | 65.8691 | 33000 | 0.0339          |
| 0.0353        | 67.8651 | 34000 | 0.0344          |
| 0.032         | 69.8611 | 35000 | 0.0343          |
| 0.0315        | 71.8571 | 36000 | 0.0341          |
| 0.0323        | 73.8531 | 37000 | 0.0341          |
| 0.0316        | 75.8492 | 38000 | 0.0342          |
| 0.0318        | 77.8452 | 39000 | 0.0342          |
| 0.032         | 79.8412 | 40000 | 0.0341          |

Framework versions

  • Transformers 4.57.1
  • Pytorch 2.8.0+cu128
  • Datasets 4.2.0
  • Tokenizers 0.22.2
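
A quick way to check that the local environment matches the versions above (the expected values in the comments are taken from this list):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # expected 4.57.1
print("torch", torch.__version__)                # expected 2.8.0+cu128
print("datasets", datasets.__version__)          # expected 4.2.0
print("tokenizers", tokenizers.__version__)      # expected 0.22.2
```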