# swinv2-tiny-patch4-window8-256-dmae-humeda-DAV49

This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.7811
- Accuracy: 0.7386
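The reported accuracy is the usual fraction of correctly classified evaluation examples. A minimal sketch of the metric (the helper name is illustrative, not taken from the training code):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that exactly match their labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 8 predictions, 6 correct
print(accuracy([0, 1, 1, 2, 0, 1, 2, 2], [0, 1, 1, 2, 0, 1, 0, 0]))  # → 0.75
```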

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
- mixed_precision_training: Native AMP
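The effective batch size and warmup length follow from the values above. A small sketch checking the arithmetic (steps per epoch is read off the results table, where one epoch spans 8 optimizer steps):

```python
train_batch_size = 32
gradient_accumulation_steps = 4
num_epochs = 40
steps_per_epoch = 8   # from the results table: step 8 corresponds to ~1 epoch
warmup_ratio = 0.1

# Effective (total) train batch size per optimizer update
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 128, matching the value listed above

# Warmup length implied by lr_scheduler_warmup_ratio = 0.1
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(total_steps * warmup_ratio)
print(total_steps, warmup_steps)  # → 320 32
```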

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9412  | 8    | 1.5166          | 0.4318   |
| 1.5823        | 1.9412  | 16   | 1.4043          | 0.4432   |
| 1.5029        | 2.9412  | 24   | 1.3230          | 0.5      |
| 1.5029        | 3.9412  | 32   | 1.2373          | 0.5795   |
| 1.3569        | 4.9412  | 40   | 1.0701          | 0.6023   |
| 1.1064        | 5.9412  | 48   | 0.9832          | 0.6023   |
| 1.1064        | 6.9412  | 56   | 0.9004          | 0.6705   |
| 0.941         | 7.9412  | 64   | 0.8323          | 0.6591   |
| 0.7975        | 8.9412  | 72   | 0.7830          | 0.6818   |
| 0.7975        | 9.9412  | 80   | 0.7657          | 0.7045   |
| 0.7242        | 10.9412 | 88   | 0.7484          | 0.7386   |
| 0.6308        | 11.9412 | 96   | 0.7143          | 0.7386   |
| 0.6308        | 12.9412 | 104  | 0.6923          | 0.7273   |
| 0.5782        | 13.9412 | 112  | 0.6776          | 0.7386   |
| 0.5333        | 14.9412 | 120  | 0.6889          | 0.7614   |
| 0.5333        | 15.9412 | 128  | 0.6799          | 0.7841   |
| 0.495         | 16.9412 | 136  | 0.6794          | 0.7614   |
| 0.4931        | 17.9412 | 144  | 0.6921          | 0.7614   |
| 0.4931        | 18.9412 | 152  | 0.7162          | 0.7273   |
| 0.435         | 19.9412 | 160  | 0.7128          | 0.7386   |
| 0.4109        | 20.9412 | 168  | 0.7157          | 0.75     |
| 0.4109        | 21.9412 | 176  | 0.7404          | 0.7386   |
| 0.3897        | 22.9412 | 184  | 0.7275          | 0.7386   |
| 0.3718        | 23.9412 | 192  | 0.7492          | 0.7727   |
| 0.3718        | 24.9412 | 200  | 0.7520          | 0.7386   |
| 0.3866        | 25.9412 | 208  | 0.7550          | 0.7273   |
| 0.366         | 26.9412 | 216  | 0.7395          | 0.7386   |
| 0.366         | 27.9412 | 224  | 0.7340          | 0.7386   |
| 0.3454        | 28.9412 | 232  | 0.7578          | 0.7273   |
| 0.346         | 29.9412 | 240  | 0.7679          | 0.7273   |
| 0.346         | 30.9412 | 248  | 0.7546          | 0.75     |
| 0.3325        | 31.9412 | 256  | 0.7600          | 0.75     |
| 0.3117        | 32.9412 | 264  | 0.7798          | 0.7386   |
| 0.3117        | 33.9412 | 272  | 0.7944          | 0.7273   |
| 0.3177        | 34.9412 | 280  | 0.7856          | 0.7386   |
| 0.3263        | 35.9412 | 288  | 0.7813          | 0.7386   |
| 0.3263        | 36.9412 | 296  | 0.7798          | 0.7386   |
| 0.3305        | 37.9412 | 304  | 0.7804          | 0.7386   |
| 0.2999        | 38.9412 | 312  | 0.7810          | 0.7386   |
| 0.2999        | 39.9412 | 320  | 0.7811          | 0.7386   |
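The reported accuracies are all consistent with an evaluation set of 88 examples (an inference from the numbers, not something stated in the card). A quick check against a few table entries:

```python
# Hypothetical eval-set size inferred from the reported accuracies
eval_size = 88

# (correct_count, reported_accuracy) pairs taken from the table above
for correct, reported in [(38, 0.4318), (65, 0.7386), (69, 0.7841)]:
    assert round(correct / eval_size, 4) == reported
print("all reported accuracies match counts out of 88")
```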

### Framework versions

- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0