
This model is a fine-tuned version of microsoft/swin-base-patch4-window7-224 on the cifar10 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0831
  • Accuracy: 0.9804
  • Dt Accuracy: 0.9804
  • Df Accuracy: 0.0335
  • Unlearn Overall Accuracy: 0
  • Unlearn Time: None
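Dt and Df presumably denote the retain and forget splits used in machine unlearning, with accuracy computed separately on each. A minimal sketch of that per-split evaluation, assuming plain prediction/label lists (the helper and the sample data are illustrative, not taken from the training code):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    assert preds and len(preds) == len(labels)
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical predictions on the retain (Dt) and forget (Df) splits.
dt_preds, dt_labels = [1, 0, 2, 1], [1, 0, 2, 2]
df_preds, df_labels = [0, 1, 2, 2], [2, 2, 1, 0]

dt_acc = accuracy(dt_preds, dt_labels)  # high is good: knowledge retained
df_acc = accuracy(df_preds, df_labels)  # low is good: forget set unlearned
```

For this checkpoint, Dt accuracy stays at 0.9804 while Df accuracy drops to 0.0335, i.e. the forget set is effectively unlearned while retain-set performance is preserved.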

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 128
  • eval_batch_size: 256
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 20
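With `lr_scheduler_type: linear` and no warmup specified, the learning rate presumably decays linearly from 5e-05 to 0 over the full run (20 epochs × 391 steps = 7820, per the table below). A pure-Python sketch of that schedule, assuming zero warmup steps:

```python
def linear_lr(step, total_steps=7820, base_lr=5e-05, warmup_steps=0):
    """Linear warmup (here zero steps) followed by linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Learning rate at the start, midpoint, and end of training.
lrs = [linear_lr(s) for s in (0, 3910, 7820)]
```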

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Overall Accuracy | Unlearn Overall Accuracy | Time |
|:--------------|:------|:-----|:----------------|:---------|:-----------------|:-------------------------|:-----|
| No log        | 1.0   | 391  | 0.0886          | 0.9955   | 0.1892           | 0.1892                   | None |
| 0.3371        | 2.0   | 782  | 0.0924          | 0.991    | 0.1965           | 0.1965                   | None |
| 0.3078        | 3.0   | 1173 | 0.0916          | 0.9795   | 0.2150           | 0.2150                   | None |
| 0.2949        | 4.0   | 1564 | 0.1015          | 0.9365   | 0.2808           | 0.2808                   | None |
| 0.2949        | 5.0   | 1955 | 0.0866          | 0.8855   | 0.3529           | 0.3529                   | None |
| 0.2817        | 6.0   | 2346 | 0.0875          | 0.8205   | 0.4362           | 0.4362                   | None |
| 0.2633        | 7.0   | 2737 | 0.1006          | 0.7115   | 0.5582           | 0.5582                   | None |
| 0.2379        | 8.0   | 3128 | 0.1087          | 0.5445   | 0.7107           | 0.7107                   | None |
| 0.2238        | 9.0   | 3519 | 0.0988          | 0.455    | 0.7806           | 0.7806                   | None |
| 0.2238        | 10.0  | 3910 | 0.0942          | 0.3465   | 0.8545           | 0.8545                   | None |
| 0.2054        | 11.0  | 4301 | 0.0910          | 0.307    | 0.8797           | 0.8797                   | None |
| 0.1907        | 12.0  | 4692 | 0.0918          | 0.206    | 0.9379           | 0.9379                   | None |
| 0.1745        | 13.0  | 5083 | 0.0866          | 0.2005   | 0.9419           | 0.9419                   | None |
| 0.1745        | 14.0  | 5474 | 0.0863          | 0.154    | 0.9669           | 0.9669                   | None |
| 0.1633        | 15.0  | 5865 | 0.0869          | 0.105    | 0.9908           | 0.9908                   | None |
| 0.158         | 16.0  | 6256 | 0.0882          | 0.072    | 0                | 0                        | None |
| 0.1419        | 17.0  | 6647 | 0.0881          | 0.0605   | 0                | 0                        | None |
| 0.1388        | 18.0  | 7038 | 0.0868          | 0.0415   | 0                | 0                        | None |
| 0.1388        | 19.0  | 7429 | 0.0827          | 0.041    | 0                | 0                        | None |
| 0.1313        | 20.0  | 7820 | 0.0831          | 0.0335   | 0                | 0                        | None |
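The repo name (`random_label`) suggests the forget set was unlearned by fine-tuning on randomly reassigned labels, which would explain the Accuracy column falling toward chance level over the 20 epochs while retain-set accuracy stays high. A minimal sketch of that relabeling step, assuming CIFAR-10's 10 classes (the function name and details are illustrative, not taken from the training code):

```python
import random

def randomize_labels(labels, num_classes=10, seed=42):
    """Replace each label with a uniformly random *different* class."""
    rng = random.Random(seed)
    new_labels = []
    for y in labels:
        choices = [c for c in range(num_classes) if c != y]
        new_labels.append(rng.choice(choices))
    return new_labels

# Fine-tuning on (forget_images, randomize_labels(forget_labels))
# drives forget-set accuracy toward roughly 1 / num_classes.
```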

Framework versions

  • Transformers 4.48.0
  • Pytorch 2.2.2+cu118
  • Datasets 3.2.0
  • Tokenizers 0.21.0

Model size: 86.8M parameters (Safetensors; tensor types I64, F32)

Model tree for jialicheng/unlearn_cifar10_swin-base_random_label_4_42
