# ssc-qxp-mms-model-mix-adapt-max3
This model is a fine-tuned version of facebook/mms-1b-all on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.4935
- Cer: 0.8858
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
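As a minimal usage sketch (not an official example from this card), the checkpoint can be loaded with the standard Wav2Vec2 CTC classes that MMS models use; the audio file path and 16 kHz sampling assumption below are placeholders, not taken from this card:

```python
# Hedged usage sketch: load the fine-tuned checkpoint for CTC inference.
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "ctaguchi/ssc-qxp-mms-model-mix-adapt-max3"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS checkpoints expect 16 kHz mono audio; "sample.wav" is a placeholder path.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```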
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
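A minimal sketch reconstructing the listed hyperparameters with `transformers.TrainingArguments`; the `output_dir` value is a placeholder, and the dataset/model wiring is omitted since it is not documented above:

```python
# Hedged reconstruction of the hyperparameters listed in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ssc-qxp-mms-model-mix-adapt-max3",  # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=10,
    fp16=True,  # native AMP mixed-precision training
)
```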
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|---|---|---|---|---|---|
| 4.7441 | 0.6700 | 200 | 4.5692 | 0.8499 | 1.0 |
| 3.1454 | 1.3384 | 400 | 2.8762 | 0.8834 | 1.0 |
| 2.8759 | 2.0067 | 600 | 2.7991 | 0.8939 | 1.0 |
| 2.7927 | 2.6767 | 800 | 2.7170 | 0.8714 | 1.0 |
| 2.6427 | 3.3451 | 1000 | 2.5878 | 0.8401 | 1.0046 |
| 2.6826 | 4.0134 | 1200 | 2.6477 | 0.9101 | 1.0 |
| 2.6617 | 4.6834 | 1400 | 2.5153 | 0.8783 | 1.0 |
| 2.5807 | 5.3518 | 1600 | 2.4798 | 0.8144 | 1.0055 |
| 2.5083 | 6.0201 | 1800 | 2.4124 | 0.8721 | 1.0 |
| 2.5154 | 6.6901 | 2000 | 2.3597 | 0.8803 | 1.0 |
| 2.7958 | 7.3585 | 2200 | 2.7983 | 0.9183 | 1.0 |
| 2.7276 | 8.0268 | 2400 | 2.5911 | 0.9095 | 1.0 |
| 2.649 | 8.6968 | 2600 | 2.5195 | 0.8957 | 1.0 |
| 2.5892 | 9.3652 | 2800 | 2.4935 | 0.8858 | 1.0 |
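The Cer and Wer columns are character and word error rates on the validation set. As a hedged illustration of how such scores can be computed, the `evaluate` library exposes both metrics (the sample transcripts below are placeholders, not data from this model):

```python
# Hedged sketch: computing WER/CER with the `evaluate` library (requires jiwer).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder transcripts; real use would decode the evaluation set.
predictions = ["hello world"]
references = ["hello word"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```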
### Framework versions
- Transformers 4.52.1
- PyTorch 2.9.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4