---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: CGIAR
    results: []
---

# CGIAR

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.6887
- Accuracy: 0.7211
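
Since the card does not yet document usage, here is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub; the repo id `your-namespace/CGIAR` and the image path are placeholders, and the class labels depend on this repository's configuration:

```python
# Minimal sketch: image classification with this fine-tuned ViT checkpoint.
# "your-namespace/CGIAR" is a placeholder repo id; replace it with this model's actual Hub path.
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="your-namespace/CGIAR")
image = Image.open("example.jpg").convert("RGB")  # illustrative local image path
predictions = classifier(image)
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```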

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 40
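
The sketch below mirrors these values in a `TrainingArguments` object (Transformers 4.36-era API). Dataset loading, the image processor, and the accuracy metric are omitted, `output_dir` is a placeholder, and the Adam betas and epsilon listed above match the Trainer defaults, so they are not set explicitly.

```python
# Sketch of a Trainer configuration approximating the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CGIAR",            # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=40,
    evaluation_strategy="epoch",   # assumption: metrics in the results table are reported per epoch
    save_strategy="epoch",
)
```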

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8524        | 1.0   | 652   | 0.7425          | 0.6910   |
| 0.7133        | 2.0   | 1304  | 0.7050          | 0.7106   |
| 0.6627        | 3.0   | 1956  | 0.6887          | 0.7211   |
| 0.5699        | 4.0   | 2608  | 0.6967          | 0.7244   |
| 0.4901        | 5.0   | 3260  | 0.6904          | 0.7292   |
| 0.4374        | 6.0   | 3912  | 0.7500          | 0.7255   |
| 0.3277        | 7.0   | 4564  | 0.8507          | 0.7083   |
| 0.2602        | 8.0   | 5216  | 0.8675          | 0.7259   |
| 0.2324        | 9.0   | 5868  | 0.9844          | 0.7315   |
| 0.1703        | 10.0  | 6520  | 1.0635          | 0.7112   |
| 0.1349        | 11.0  | 7172  | 1.0992          | 0.7229   |
| 0.1203        | 12.0  | 7824  | 1.2086          | 0.7215   |
| 0.1148        | 13.0  | 8476  | 1.2128          | 0.7250   |
| 0.087         | 14.0  | 9128  | 1.2238          | 0.7300   |
| 0.0842        | 15.0  | 9780  | 1.2935          | 0.7294   |
| 0.0765        | 16.0  | 10432 | 1.3468          | 0.7282   |
| 0.0609        | 17.0  | 11084 | 1.3041          | 0.7271   |
| 0.0543        | 18.0  | 11736 | 1.4513          | 0.7259   |
| 0.0507        | 19.0  | 12388 | 1.4640          | 0.7382   |
| 0.0457        | 20.0  | 13040 | 1.5730          | 0.7298   |
| 0.0376        | 21.0  | 13692 | 1.5676          | 0.7315   |
| 0.0434        | 22.0  | 14344 | 1.6493          | 0.7315   |
| 0.0324        | 23.0  | 14996 | 1.6090          | 0.7294   |
| 0.0318        | 24.0  | 15648 | 1.7164          | 0.7348   |
| 0.0256        | 25.0  | 16300 | 1.6981          | 0.7413   |
| 0.0252        | 26.0  | 16952 | 1.6465          | 0.7317   |
| 0.0189        | 27.0  | 17604 | 1.7949          | 0.7265   |
| 0.0211        | 28.0  | 18256 | 1.7796          | 0.7284   |
| 0.0184        | 29.0  | 18908 | 1.8446          | 0.7319   |
| 0.0128        | 30.0  | 19560 | 1.8685          | 0.7300   |
| 0.0098        | 31.0  | 20212 | 1.9648          | 0.7278   |
| 0.008         | 32.0  | 20864 | 1.9593          | 0.7303   |
| 0.006         | 33.0  | 21516 | 1.9797          | 0.7325   |
| 0.0064        | 34.0  | 22168 | 1.9700          | 0.7401   |
| 0.0077        | 35.0  | 22820 | 2.0103          | 0.7307   |
| 0.0036        | 36.0  | 23472 | 2.0696          | 0.7334   |
| 0.0019        | 37.0  | 24124 | 2.0095          | 0.7374   |
| 0.0024        | 38.0  | 24776 | 2.0186          | 0.7394   |
| 0.001         | 39.0  | 25428 | 2.0157          | 0.7432   |
| 0.0013        | 40.0  | 26080 | 2.0033          | 0.7438   |

### Framework versions

- Transformers 4.36.1
- PyTorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0