# results
This model is a fine-tuned version of [google/siglip2-large-patch16-256](https://huggingface.co/google/siglip2-large-patch16-256) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 7.3621
- Model Preparation Time: 0.0027
- Age MAE: 3.9585
- Gender Acc: 0.9706
- Eth Acc: 0.8793
- Age Group Acc: 0.6400
- Gender Age Group Acc: 0.6197
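
The card does not define these metric names; a plausible reading is that Age MAE is the mean absolute error of an age-regression head, the `*Acc` values are per-task classification accuracies, and Gender Age Group Acc is the joint accuracy of the gender and age-group predictions. A minimal sketch under that assumption (all head names and array shapes are hypothetical, not from the card):

```python
import numpy as np

def compute_metrics(age_pred, age_true, gender_pred, gender_true,
                    eth_pred, eth_true, group_pred, group_true):
    """Hypothetical multi-task metrics matching the columns reported above."""
    return {
        "age_mae": np.abs(age_pred - age_true).mean(),      # regression head
        "gender_acc": (gender_pred == gender_true).mean(),  # classification head
        "eth_acc": (eth_pred == eth_true).mean(),           # ethnicity head
        "age_group_acc": (group_pred == group_true).mean(), # binned-age head
        # joint accuracy: both gender and age group must be correct
        "gender_age_group_acc": ((gender_pred == gender_true)
                                 & (group_pred == group_true)).mean(),
    }
```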
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
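
These values map directly onto `transformers.TrainingArguments`; a hedged reconstruction is sketched below (the `output_dir` and TPU/XLA wiring are assumptions, and the effective train batch size of 128 follows from 32 × 4):

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameters above.
training_args = TrainingArguments(
    output_dir="results",            # assumed; matches the model name
    learning_rate=4e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 effective train batch
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```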
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Age MAE | Gender Acc | Eth Acc | Age Group Acc | Gender Age Group Acc |
|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 170 | 8.1110 | 0.0027 | 5.2484 | 0.9552 | 0.8125 | 0.5699 | 0.5562 |
| No log | 2.0 | 340 | 6.7194 | 0.0027 | 4.2664 | 0.9672 | 0.8486 | 0.6114 | 0.5915 |
| 8.9853 | 3.0 | 510 | 7.1145 | 0.0027 | 4.6598 | 0.9664 | 0.8627 | 0.6027 | 0.5881 |
| 8.9853 | 4.0 | 680 | 6.8165 | 0.0027 | 4.4121 | 0.9697 | 0.8590 | 0.6101 | 0.5844 |
| 8.9853 | 5.0 | 850 | 6.4416 | 0.0027 | 4.1299 | 0.9685 | 0.8714 | 0.6267 | 0.6006 |
| 6.2831 | 6.0 | 1020 | 6.3622 | 0.0027 | 4.1352 | 0.9689 | 0.8764 | 0.6491 | 0.6242 |
| 6.2831 | 7.0 | 1190 | 6.7311 | 0.0027 | 4.4327 | 0.9706 | 0.8735 | 0.6217 | 0.6043 |
| 6.2831 | 8.0 | 1360 | 6.3400 | 0.0027 | 4.1008 | 0.9722 | 0.8727 | 0.6474 | 0.6296 |
| 4.9201 | 9.0 | 1530 | 6.4687 | 0.0027 | 4.1312 | 0.9693 | 0.8814 | 0.6371 | 0.6226 |
| 4.9201 | 10.0 | 1700 | 6.4563 | 0.0027 | 4.0639 | 0.9701 | 0.8756 | 0.6479 | 0.6197 |
| 4.9201 | 11.0 | 1870 | 6.6413 | 0.0027 | 4.1177 | 0.9726 | 0.8706 | 0.6255 | 0.6114 |
| 3.7246 | 12.0 | 2040 | 6.7000 | 0.0027 | 4.0861 | 0.9722 | 0.8851 | 0.6404 | 0.6134 |
| 3.7246 | 13.0 | 2210 | 7.0551 | 0.0027 | 4.2035 | 0.9714 | 0.8781 | 0.6251 | 0.6089 |
| 3.7246 | 14.0 | 2380 | 6.9348 | 0.0027 | 4.0477 | 0.9706 | 0.8752 | 0.6354 | 0.6118 |
| 2.6813 | 15.0 | 2550 | 7.0902 | 0.0027 | 4.0125 | 0.9718 | 0.8756 | 0.6454 | 0.6259 |
| 2.6813 | 16.0 | 2720 | 7.2237 | 0.0027 | 4.0217 | 0.9685 | 0.8789 | 0.6354 | 0.6031 |
| 2.6813 | 17.0 | 2890 | 7.2564 | 0.0027 | 4.0108 | 0.9697 | 0.8859 | 0.6383 | 0.6139 |
| 1.921 | 18.0 | 3060 | 7.3150 | 0.0027 | 4.0026 | 0.9701 | 0.8797 | 0.6412 | 0.6172 |
| 1.921 | 19.0 | 3230 | 7.3417 | 0.0027 | 3.9617 | 0.9697 | 0.8801 | 0.6379 | 0.6163 |
| 1.921 | 20.0 | 3400 | 7.3621 | 0.0027 | 3.9585 | 0.9706 | 0.8793 | 0.6400 | 0.6197 |
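
Validation loss bottoms out at epoch 8 (6.3400) and climbs back to 7.3621 by epoch 20 while training loss falls from ~8.99 to ~1.92, which suggests mild overfitting in the later epochs even though Age MAE continues to improve slightly. If retraining, keeping the best checkpoint rather than the last one is one option; a hedged sketch of that setup (the metric key and patience value are assumptions, not from the card):

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Assumed extension of the hyperparameters above so the Trainer restores
# the checkpoint with the lowest validation loss (here, the epoch-8 one).
training_args = TrainingArguments(
    output_dir="results",
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",  # minimize validation loss
    greater_is_better=False,
)

# Passed as Trainer(..., callbacks=[early_stop]); stops training after
# 3 consecutive evaluations with no improvement in eval_loss.
early_stop = EarlyStoppingCallback(early_stopping_patience=3)
```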
### Framework versions
- Transformers 4.57.3
- Pytorch 2.9.0+cpu
- Datasets 4.4.2
- Tokenizers 0.22.1