# SmolVLM-Base-ocr-isl-with-isl-backbone
This model is a fine-tuned version of HuggingFaceTB/SmolVLM-Base on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0147
- Wer: 0.2907
- Cer: 0.5314
- Exact Match: 0.0
- Special Char Acc: 1.0
- Seq Acc 5: 0.0
- Seq Acc 10: 0.0
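The Wer and Cer figures above are word- and character-level edit-distance rates (substitutions, insertions, and deletions divided by reference length). As a rough illustration only, not the exact evaluation code behind this card, they can be computed like this:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def wer(reference, hypothesis):
    # Word error rate: edit distance over word tokens.
    ref = reference.split()
    return levenshtein(ref, hypothesis.split()) / max(len(ref), 1)

def cer(reference, hypothesis):
    # Character error rate: edit distance over characters.
    return levenshtein(reference, hypothesis) / max(len(reference), 1)
```

A Cer higher than the Wer, as seen here (0.5314 vs. 0.2907), typically means errors are concentrated in long words.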
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
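Two of these values interact: the effective batch size is train_batch_size × gradient_accumulation_steps = 4 × 4 = 16, and the cosine scheduler first ramps the learning rate linearly over the 500 warmup steps, then decays it along a half-cosine. A plain-Python sketch of that schedule (assuming decay to zero over the run's ~3750 optimizer steps, which is an inference from the results table, not a logged setting):

```python
import math

def lr_at(step, base_lr=3e-4, warmup=500, total=3750):
    """Linear warmup followed by cosine decay to zero.

    Approximates the cosine-with-warmup schedule used by common
    trainers; `total` is assumed from the last logged step.
    """
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / max(1, total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

So the learning rate peaks at 3e-4 exactly at step 500 and is near zero by the final logged step.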
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Exact Match | Special Char Acc | Seq Acc 5 | Seq Acc 10 |
|---|---|---|---|---|---|---|---|---|---|
| 0.3666 | 0.0325 | 125 | 0.2243 | 0.8704 | 0.8629 | 0.0 | 0.9622 | 0.0 | 0.0 |
| 0.168 | 0.0649 | 250 | 0.1485 | 0.7556 | 0.9306 | 0.0 | 0.9680 | 0.0 | 0.0 |
| 0.1282 | 0.0974 | 375 | 0.1187 | 0.4481 | 0.6394 | 0.0 | 0.9797 | 0.0 | 0.0 |
| 0.0984 | 0.1299 | 500 | 0.0965 | 0.5056 | 0.7014 | 0.0 | 0.9826 | 0.0 | 0.0 |
| 0.0891 | 0.1624 | 625 | 0.0755 | 0.4611 | 0.6485 | 0.0 | 0.9913 | 0.0 | 0.0 |
| 0.0744 | 0.1948 | 750 | 0.0638 | 0.4963 | 0.7116 | 0.0 | 0.9913 | 0.0 | 0.0 |
| 0.0708 | 0.2273 | 875 | 0.0518 | 0.3944 | 0.5805 | 0.0 | 0.9942 | 0.0 | 0.0 |
| 0.0647 | 0.2598 | 1000 | 0.0611 | 0.5389 | 0.8122 | 0.0 | 0.9855 | 0.0 | 0.0 |
| 0.0572 | 0.2922 | 1125 | 0.0454 | 0.4796 | 0.7158 | 0.0 | 0.9913 | 0.0 | 0.0 |
| 0.0555 | 0.3247 | 1250 | 0.0320 | 0.5685 | 0.7432 | 0.0 | 0.9884 | 0.0 | 0.0 |
| 0.0445 | 0.3572 | 1375 | 0.0386 | 0.4611 | 0.6404 | 0.0 | 0.9971 | 0.0 | 0.0 |
| 0.0455 | 0.3897 | 1500 | 0.0392 | 0.4259 | 0.6783 | 0.0 | 0.9913 | 0.0 | 0.0 |
| 0.0469 | 0.4221 | 1625 | 0.0319 | 0.2944 | 0.6415 | 0.0 | 0.9971 | 0.0 | 0.0 |
| 0.0386 | 0.4546 | 1750 | 0.0305 | 0.3574 | 0.5656 | 0.0 | 0.9971 | 0.0 | 0.0 |
| 0.0393 | 0.4871 | 1875 | 0.0327 | 0.2889 | 0.5405 | 0.0 | 0.9971 | 0.0 | 0.0 |
| 0.0364 | 0.5195 | 2000 | 0.0254 | 0.2685 | 0.4360 | 0.0 | 0.9971 | 0.0 | 0.0 |
| 0.0338 | 0.5520 | 2125 | 0.0254 | 0.2556 | 0.4616 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0332 | 0.5845 | 2250 | 0.0217 | 0.3111 | 0.5105 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0285 | 0.6170 | 2375 | 0.0257 | 0.3167 | 0.4898 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0291 | 0.6494 | 2500 | 0.0230 | 0.4481 | 0.6054 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.028 | 0.6819 | 2625 | 0.0204 | 0.3741 | 0.5687 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.024 | 0.7144 | 2750 | 0.0204 | 0.3352 | 0.5242 | 0.0 | 0.9971 | 0.0 | 0.0 |
| 0.0262 | 0.7469 | 2875 | 0.0170 | 0.3796 | 0.6020 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0237 | 0.7793 | 3000 | 0.0155 | 0.4222 | 0.6574 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0246 | 0.8118 | 3125 | 0.0164 | 0.2796 | 0.5151 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.023 | 0.8443 | 3250 | 0.0152 | 0.2815 | 0.5054 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0225 | 0.8767 | 3375 | 0.0150 | 0.2722 | 0.5151 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0216 | 0.9092 | 3500 | 0.0153 | 0.2815 | 0.5268 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0213 | 0.9417 | 3625 | 0.0149 | 0.2852 | 0.5300 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0204 | 0.9742 | 3750 | 0.0147 | 0.2907 | 0.5314 | 0.0 | 1.0 | 0.0 | 0.0 |
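The step and epoch columns also let one back out an approximate training-set size: 3750 optimizer steps at an effective batch of 16 cover 60,000 samples, which the log says is 0.9742 of an epoch. This is an estimate derived from the table, not a documented figure:

```python
steps = 3750            # last logged optimizer step
effective_batch = 16    # train_batch_size 4 * gradient_accumulation_steps 4
epoch_fraction = 0.9742 # epoch value logged at that step

samples_seen = steps * effective_batch          # 60000 samples processed
dataset_size_est = samples_seen / epoch_fraction  # roughly 61.6k examples
```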
### Framework versions
- PEFT 0.18.0
- Transformers 4.57.3
- PyTorch 2.9.1+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1
### Model tree for Sigurdur/SmolVLM-Base-ocr-isl-with-isl-backbone

- Base model: HuggingFaceTB/SmolLM2-1.7B
- Quantized: HuggingFaceTB/SmolLM2-1.7B-Instruct
- Finetuned: HuggingFaceTB/SmolVLM-Base