beit-base-patch16-224-65-fold5

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4937
  • Accuracy: 0.9014
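
The card does not document usage, so here is a minimal inference sketch. It assumes the checkpoint is published on the Hub under BilalMuftuoglu/beit-base-patch16-224-65-fold5 (as the repository name suggests) and that the standard transformers auto classes apply; the image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "BilalMuftuoglu/beit-base-patch16-224-65-fold5"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

# "example.jpg" is a placeholder; substitute any RGB image from the target domain.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_idx])
```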

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
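
For reference, the list above maps onto the Hugging Face Trainer API roughly as follows. This is a sketch, not the author's actual training script; the output directory is a placeholder, and the Adam settings listed above match the TrainingArguments defaults, so they need no explicit override.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-65-fold5",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults.
)
```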

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9231  | 3    | 0.7742          | 0.4507   |
| No log        | 1.8462  | 6    | 0.7185          | 0.4930   |
| No log        | 2.7692  | 9    | 0.6625          | 0.5634   |
| 0.7338        | 4.0     | 13   | 0.6136          | 0.7183   |
| 0.7338        | 4.9231  | 16   | 0.5974          | 0.6479   |
| 0.7338        | 5.8462  | 19   | 0.5771          | 0.6338   |
| 0.6191        | 6.7692  | 22   | 0.5400          | 0.7042   |
| 0.6191        | 8.0     | 26   | 0.5127          | 0.7183   |
| 0.6191        | 8.9231  | 29   | 0.5341          | 0.7324   |
| 0.5723        | 9.8462  | 32   | 0.4877          | 0.7887   |
| 0.5723        | 10.7692 | 35   | 0.6659          | 0.6197   |
| 0.5723        | 12.0    | 39   | 0.5790          | 0.6761   |
| 0.5161        | 12.9231 | 42   | 0.5001          | 0.7606   |
| 0.5161        | 13.8462 | 45   | 0.4195          | 0.8310   |
| 0.5161        | 14.7692 | 48   | 0.4806          | 0.7746   |
| 0.4982        | 16.0    | 52   | 0.4013          | 0.8028   |
| 0.4982        | 16.9231 | 55   | 0.4189          | 0.8028   |
| 0.4982        | 17.8462 | 58   | 0.4018          | 0.8310   |
| 0.438         | 18.7692 | 61   | 0.5230          | 0.7183   |
| 0.438         | 20.0    | 65   | 0.4768          | 0.7465   |
| 0.438         | 20.9231 | 68   | 0.4428          | 0.7887   |
| 0.4641        | 21.8462 | 71   | 0.4122          | 0.8169   |
| 0.4641        | 22.7692 | 74   | 0.4537          | 0.7746   |
| 0.4641        | 24.0    | 78   | 0.3838          | 0.8310   |
| 0.308         | 24.9231 | 81   | 0.4586          | 0.8028   |
| 0.308         | 25.8462 | 84   | 0.5623          | 0.8028   |
| 0.308         | 26.7692 | 87   | 0.4050          | 0.8310   |
| 0.2766        | 28.0    | 91   | 0.3860          | 0.8169   |
| 0.2766        | 28.9231 | 94   | 0.4062          | 0.8169   |
| 0.2766        | 29.8462 | 97   | 0.6191          | 0.8169   |
| 0.288         | 30.7692 | 100  | 0.6076          | 0.7746   |
| 0.288         | 32.0    | 104  | 0.5300          | 0.8169   |
| 0.288         | 32.9231 | 107  | 0.6178          | 0.7606   |
| 0.2676        | 33.8462 | 110  | 0.4465          | 0.8451   |
| 0.2676        | 34.7692 | 113  | 0.5893          | 0.7606   |
| 0.2676        | 36.0    | 117  | 0.4782          | 0.8169   |
| 0.2306        | 36.9231 | 120  | 0.4946          | 0.8310   |
| 0.2306        | 37.8462 | 123  | 0.4534          | 0.8451   |
| 0.2306        | 38.7692 | 126  | 0.4603          | 0.8451   |
| 0.2095        | 40.0    | 130  | 0.5839          | 0.8028   |
| 0.2095        | 40.9231 | 133  | 0.4536          | 0.8310   |
| 0.2095        | 41.8462 | 136  | 0.4617          | 0.8592   |
| 0.2095        | 42.7692 | 139  | 0.4531          | 0.8592   |
| 0.2171        | 44.0    | 143  | 0.4325          | 0.8732   |
| 0.2171        | 44.9231 | 146  | 0.4732          | 0.8592   |
| 0.2171        | 45.8462 | 149  | 0.4779          | 0.8592   |
| 0.1686        | 46.7692 | 152  | 0.4841          | 0.8451   |
| 0.1686        | 48.0    | 156  | 0.5690          | 0.8310   |
| 0.1686        | 48.9231 | 159  | 0.5477          | 0.8451   |
| 0.1644        | 49.8462 | 162  | 0.5844          | 0.8310   |
| 0.1644        | 50.7692 | 165  | 0.5818          | 0.8310   |
| 0.1644        | 52.0    | 169  | 0.4674          | 0.8451   |
| 0.1915        | 52.9231 | 172  | 0.5320          | 0.8732   |
| 0.1915        | 53.8462 | 175  | 0.4933          | 0.8451   |
| 0.1915        | 54.7692 | 178  | 0.5090          | 0.8592   |
| 0.1561        | 56.0    | 182  | 0.4864          | 0.8451   |
| 0.1561        | 56.9231 | 185  | 0.4652          | 0.8732   |
| 0.1561        | 57.8462 | 188  | 0.5113          | 0.8592   |
| 0.1298        | 58.7692 | 191  | 0.4803          | 0.8732   |
| 0.1298        | 60.0    | 195  | 0.4794          | 0.8451   |
| 0.1298        | 60.9231 | 198  | 0.4743          | 0.8451   |
| 0.1467        | 61.8462 | 201  | 0.4739          | 0.8592   |
| 0.1467        | 62.7692 | 204  | 0.5211          | 0.8451   |
| 0.1467        | 64.0    | 208  | 0.5315          | 0.8592   |
| 0.1363        | 64.9231 | 211  | 0.5182          | 0.8592   |
| 0.1363        | 65.8462 | 214  | 0.5160          | 0.8451   |
| 0.1363        | 66.7692 | 217  | 0.6170          | 0.8169   |
| 0.154         | 68.0    | 221  | 0.4857          | 0.8592   |
| 0.154         | 68.9231 | 224  | 0.4763          | 0.8592   |
| 0.154         | 69.8462 | 227  | 0.4937          | 0.9014   |
| 0.141         | 70.7692 | 230  | 0.5038          | 0.8873   |
| 0.141         | 72.0    | 234  | 0.5026          | 0.8592   |
| 0.141         | 72.9231 | 237  | 0.5019          | 0.8592   |
| 0.1166        | 73.8462 | 240  | 0.5028          | 0.8592   |
| 0.1166        | 74.7692 | 243  | 0.5226          | 0.8592   |
| 0.1166        | 76.0    | 247  | 0.5295          | 0.8732   |
| 0.117         | 76.9231 | 250  | 0.5073          | 0.8732   |
| 0.117         | 77.8462 | 253  | 0.5081          | 0.8732   |
| 0.117         | 78.7692 | 256  | 0.5036          | 0.8592   |
| 0.1037        | 80.0    | 260  | 0.5038          | 0.8451   |
| 0.1037        | 80.9231 | 263  | 0.5072          | 0.8451   |
| 0.1037        | 81.8462 | 266  | 0.5081          | 0.8451   |
| 0.1037        | 82.7692 | 269  | 0.5062          | 0.8310   |
| 0.1085        | 84.0    | 273  | 0.5144          | 0.8451   |
| 0.1085        | 84.9231 | 276  | 0.5208          | 0.8592   |
| 0.1085        | 85.8462 | 279  | 0.5248          | 0.8592   |
| 0.0939        | 86.7692 | 282  | 0.5301          | 0.8592   |
| 0.0939        | 88.0    | 286  | 0.5357          | 0.8451   |
| 0.0939        | 88.9231 | 289  | 0.5398          | 0.8451   |
| 0.0962        | 89.8462 | 292  | 0.5434          | 0.8451   |
| 0.0962        | 90.7692 | 295  | 0.5455          | 0.8451   |
| 0.0962        | 92.0    | 299  | 0.5448          | 0.8451   |
| 0.1131        | 92.3077 | 300  | 0.5446          | 0.8451   |

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1