# iitb_en_indic_robust_punctuation_model
This model is a fine-tuned version of [ai4bharat/indictrans2-en-indic-dist-200M](https://huggingface.co/ai4bharat/indictrans2-en-indic-dist-200M) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2679
- BLEU: 12.6319
- chrF++: 35.424
- COMET: 0.549
- BLEURT: not computed
- Gen Len: 20.8721
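For reference, corpus-level BLEU and chrF++ of the kind reported above are conventionally computed with `sacrebleu`. The snippet below is a minimal sketch of that recipe, not the exact evaluation script behind this card; the hypothesis/reference strings are placeholders. COMET needs the separate `unbabel-comet` package and a learned checkpoint, so it is omitted here.

```python
# Minimal sketch: recomputing corpus-level BLEU and chrF++ with sacrebleu.
# The strings below are placeholders, not drawn from this model's eval set.
import sacrebleu

hypotheses = ["यह एक उदाहरण अनुवाद है।"]    # system outputs, one string per segment
references = [["यह एक उदाहरण अनुवाद है।"]]  # one reference corpus, aligned with hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
# chrF++ is chrF extended with word 2-grams, hence word_order=2.
chrfpp = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)

print(f"BLEU:   {bleu.score:.4f}")
print(f"chrF++: {chrfpp.score:.4f}")
```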
## Model description
More information needed
## Intended uses & limitations
More information needed
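Pending an official usage snippet, the sketch below follows the base model's (ai4bharat/indictrans2-en-indic-dist-200M) documented inference recipe, which relies on the separate `IndicTransToolkit` package for pre- and post-processing. The target language `hin_Deva`, the example sentence, and the generation settings are illustrative assumptions; the card does not state which Indic language(s) the model was fine-tuned for.

```python
# Hedged sketch based on the base model's documented usage; requires
# `pip install IndicTransToolkit` in addition to transformers.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit.processor import IndicProcessor

model_name = "thenlpresearcher/iitb_en_indic_robust_punctuation_model"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["Hello!! How are you... doing today?"]
# hin_Deva is an illustrative target; pick the tag for your Indic language.
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding="longest", truncation=True, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(**inputs, use_cache=True, max_length=256, num_beams=5)

decoded = tokenizer.batch_decode(generated, skip_special_tokens=True,
                                 clean_up_tokenization_spaces=True)
# Undo the toolkit's normalization and restore native-script output.
print(ip.postprocess_batch(decoded, lang="hin_Deva"))
```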
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
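As an aid to reproduction, the sketch below shows one plausible mapping of these hyperparameters onto `transformers.Seq2SeqTrainingArguments`. It is an assumption, not a published training script: `output_dir`, the eval cadence (the results table below logs validation every 6,000 steps), and `predict_with_generate` are inferred placeholders.

```python
# Hedged sketch: the card's hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir, eval cadence, and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="iitb_en_indic_robust_punctuation_model",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    eval_strategy="steps",  # the results table logs validation every 6000 steps
    eval_steps=6000,
    predict_with_generate=True,  # needed to score BLEU/chrF++ during eval
)
```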
### Training results
| Training Loss | Epoch | Step | Validation Loss | BLEU | ChrF++ | COMET | BLEURT | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 0.4295 | 0.2530 | 6000 | 0.4148 | 9.3672 | 31.5376 | 0.531 | None | 20.8761 |
| 0.4049 | 0.5059 | 12000 | 0.3887 | 9.7036 | 32.0364 | 0.5341 | None | 20.8717 |
| 0.408 | 0.7589 | 18000 | 0.3717 | 10.0257 | 32.395 | 0.5361 | None | 20.8778 |
| 0.3392 | 1.0119 | 24000 | 0.3609 | 10.2988 | 32.7193 | 0.5377 | None | 20.8761 |
| 0.3328 | 1.2649 | 30000 | 0.3508 | 10.4288 | 32.9579 | 0.5393 | None | 20.8767 |
| 0.3216 | 1.5178 | 36000 | 0.3408 | 10.7134 | 33.2408 | 0.5413 | None | 20.8727 |
| 0.3236 | 1.7708 | 42000 | 0.3333 | 10.7915 | 33.3923 | 0.5412 | None | 20.8762 |
| 0.2862 | 2.0238 | 48000 | 0.3286 | 10.8948 | 33.499 | 0.5414 | None | 20.8747 |
| 0.293 | 2.2768 | 54000 | 0.3217 | 11.0627 | 33.6756 | 0.5426 | None | 20.8708 |
| 0.2839 | 2.5297 | 60000 | 0.3168 | 11.1573 | 33.8408 | 0.5432 | None | 20.8743 |
| 0.2758 | 2.7827 | 66000 | 0.3109 | 11.3293 | 33.9424 | 0.5433 | None | 20.8745 |
| 0.2667 | 3.0357 | 72000 | 0.3075 | 11.3372 | 34.0924 | 0.5441 | None | 20.8726 |
| 0.2618 | 3.2886 | 78000 | 0.3040 | 11.5382 | 34.275 | 0.5452 | None | 20.871 |
| 0.2657 | 3.5416 | 84000 | 0.2997 | 11.6458 | 34.3695 | 0.5451 | None | 20.8744 |
| 0.2625 | 3.7946 | 90000 | 0.2961 | 11.7085 | 34.4786 | 0.5461 | None | 20.8709 |
| 0.2483 | 4.0476 | 96000 | 0.2931 | 11.8339 | 34.5484 | 0.5461 | None | 20.8726 |
| 0.2348 | 4.3005 | 102000 | 0.2897 | 11.8966 | 34.6524 | 0.5467 | None | 20.8713 |
| 0.2473 | 4.5535 | 108000 | 0.2870 | 11.9908 | 34.7337 | 0.5469 | None | 20.8717 |
| 0.2569 | 4.8065 | 114000 | 0.2829 | 12.0335 | 34.7971 | 0.5466 | None | 20.8748 |
| 0.2164 | 5.0594 | 120000 | 0.2822 | 12.1516 | 34.9439 | 0.5476 | None | 20.8711 |
| 0.2297 | 5.3124 | 126000 | 0.2802 | 12.2298 | 35.0421 | 0.548 | None | 20.8724 |
| 0.2261 | 5.5654 | 132000 | 0.2784 | 12.2563 | 35.0484 | 0.5481 | None | 20.8713 |
| 0.224 | 5.8184 | 138000 | 0.2749 | 12.3494 | 35.1261 | 0.5478 | None | 20.8722 |
| 0.2108 | 6.0713 | 144000 | 0.2750 | 12.3841 | 35.1321 | 0.5478 | None | 20.8729 |
| 0.2259 | 6.3243 | 150000 | 0.2726 | 12.4555 | 35.2263 | 0.5485 | None | 20.871 |
| 0.2172 | 6.5773 | 156000 | 0.2712 | 12.5084 | 35.2688 | 0.5484 | None | 20.8734 |
| 0.2139 | 6.8303 | 162000 | 0.2706 | 12.5631 | 35.3595 | 0.5489 | None | 20.8732 |
| 0.1997 | 7.0832 | 168000 | 0.2699 | 12.5653 | 35.367 | 0.5489 | None | 20.8736 |
| 0.2075 | 7.3362 | 174000 | 0.2690 | 12.5846 | 35.3799 | 0.5491 | None | 20.8711 |
| 0.199 | 7.5892 | 180000 | 0.2686 | 12.6322 | 35.4432 | 0.5493 | None | 20.8724 |
| 0.204 | 7.8421 | 186000 | 0.2679 | 12.6319 | 35.424 | 0.549 | None | 20.8721 |
### Framework versions
- Transformers 4.53.2
- Pytorch 2.4.0a0+f70bd71a48.nv24.06
- Datasets 2.21.0
- Tokenizers 0.21.4