
TTS_tigregna

This model is a fine-tuned version of microsoft/speecht5_tts on the tigregna_20_hr dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3608

Model description

More information needed

Intended uses & limitations

More information needed
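The card does not yet include a usage example. A minimal inference sketch with the 🤗 Transformers SpeechT5 classes might look like the following. Note the assumptions: SpeechT5 conditions on a 512-dimensional speaker x-vector, and this sketch borrows one from the `Matthijs/cmu-arctic-xvectors` dataset, which is a common placeholder rather than an embedding of the actual training speaker; the `synthesize` helper name is made up for illustration.

```python
def synthesize(text, output_path="speech.wav"):
    """Generate Tigrinya speech for `text` and write it to a WAV file.

    A sketch, not a definitive recipe: requires the `transformers`,
    `torch`, `datasets`, and `soundfile` packages, plus network access
    to download the model weights on first use.
    """
    # Imports live inside the function so the sketch stays self-contained.
    import torch
    import soundfile as sf
    from datasets import load_dataset
    from transformers import (SpeechT5ForTextToSpeech, SpeechT5HifiGan,
                              SpeechT5Processor)

    processor = SpeechT5Processor.from_pretrained("RahelM/tigregana_tts2")
    model = SpeechT5ForTextToSpeech.from_pretrained("RahelM/tigregana_tts2")
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

    # SpeechT5 needs a 512-dim speaker embedding; this one is borrowed from
    # CMU ARCTIC (an assumption -- substitute an embedding of the training
    # speaker if one is available).
    xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
    speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

    inputs = processor(text=text, return_tensors="pt")
    speech = model.generate_speech(inputs["input_ids"], speaker_embedding,
                                   vocoder=vocoder)
    sf.write(output_path, speech.numpy(), samplerate=16000)
```

The 16 kHz sample rate matches SpeechT5's output; swapping in a different vocoder or speaker embedding changes voice quality but not the overall flow.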

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 30000
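The effective batch size and learning-rate schedule implied by the list above can be sanity-checked with a few lines of plain Python; `lr_at` mirrors the behavior of a linear scheduler with warmup (peak LR after 500 steps, then linear decay to zero), which is what `lr_scheduler_type: linear` configures:

```python
train_batch_size = 8
gradient_accumulation_steps = 8
# One optimizer step accumulates gradients over 8 micro-batches of 8:
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 64

learning_rate = 1e-05
warmup_steps = 500
training_steps = 30000

def lr_at(step):
    """Learning rate at a given optimizer step under linear warmup + decay."""
    if step < warmup_steps:
        # Ramp linearly from 0 up to the peak learning rate.
        return learning_rate * step / warmup_steps
    # Decay linearly from the peak back down to 0 at the final step.
    return learning_rate * (training_steps - step) / (training_steps - warmup_steps)
```

So the run warms up for the first 500 of 30,000 steps and spends the remaining 29,500 steps decaying toward zero.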

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.4347        | 11.33  | 1000  | 0.3954          |
| 0.4138        | 22.66  | 2000  | 0.3801          |
| 0.4055        | 33.99  | 3000  | 0.3721          |
| 0.3997        | 45.33  | 4000  | 0.3683          |
| 0.3941        | 56.66  | 5000  | 0.3643          |
| 0.3879        | 67.99  | 6000  | 0.3631          |
| 0.3826        | 79.32  | 7000  | 0.3619          |
| 0.3846        | 90.65  | 8000  | 0.3607          |
| 0.3779        | 101.98 | 9000  | 0.3599          |
| 0.3756        | 113.31 | 10000 | 0.3603          |
| 0.3758        | 124.65 | 11000 | 0.3596          |
| 0.3729        | 135.98 | 12000 | 0.3586          |
| 0.3742        | 147.31 | 13000 | 0.3610          |
| 0.3714        | 158.64 | 14000 | 0.3583          |
| 0.3712        | 169.97 | 15000 | 0.3601          |
| 0.3689        | 181.3  | 16000 | 0.3608          |
| 0.3706        | 192.63 | 17000 | 0.3607          |
| 0.3676        | 203.97 | 18000 | 0.3594          |
| 0.367         | 215.3  | 19000 | 0.3595          |
| 0.3627        | 226.63 | 20000 | 0.3593          |
| 0.3623        | 237.96 | 21000 | 0.3601          |
| 0.3641        | 249.29 | 22000 | 0.3599          |
| 0.365         | 260.62 | 23000 | 0.3604          |
| 0.3621        | 271.95 | 24000 | 0.3607          |
| 0.3644        | 283.29 | 25000 | 0.3603          |
| 0.3678        | 294.62 | 26000 | 0.3607          |
| 0.3642        | 305.95 | 27000 | 0.3610          |
| 0.3624        | 317.28 | 28000 | 0.3615          |
| 0.3621        | 328.61 | 29000 | 0.3615          |
| 0.3615        | 339.94 | 30000 | 0.3608          |

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size: 0.1B parameters (F32, Safetensors)

Model tree for RahelM/tigregana_tts2: fine-tuned from microsoft/speecht5_tts