unlearn_cifar100_swin-base_bad_teaching_2_87

This model is a fine-tuned version of microsoft/swin-base-patch4-window7-224 on the cifar100 dataset. It achieves the following results on the evaluation set:

  • Loss: 13.0270
  • Accuracy: 0.3061
  • Dt Accuracy: 0.3061
  • Df Accuracy: 0.194
  • Unlearn Overall Accuracy: 0
  • Unlearn Time: None
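
For reference, a minimal loading and inference sketch with the transformers library is shown below. The repository id follows this card's model path, and the input image path is a placeholder; this is an illustration, not an official usage snippet from the authors.

```python
# Minimal inference sketch; the repo id follows this card's model path and
# the image path is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "jialicheng/unlearn_cifar100_swin-base_bad_teaching_2_87"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label.get(pred, pred))
```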

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an approximate TrainingArguments sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 128
  • eval_batch_size: 256
  • seed: 87
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
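
As a rough guide only, the hyperparameters above map onto transformers TrainingArguments roughly as sketched below. The output_dir and the per-epoch evaluation strategy are assumptions; this is not the original bad-teaching unlearning script.

```python
# Approximate TrainingArguments mirroring the hyperparameters above;
# a sketch, not the authors' training code.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="unlearn_cifar100_swin-base_bad_teaching_2_87",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=256,
    seed=87,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation results below
)
```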

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Overall Accuracy | Unlearn Overall Accuracy | Time |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------------:|:------------------------:|:----:|
| No log        | 1.0   | 16   | 1.5558          | 0.693    | 0                | 0                        | None |
| No log        | 2.0   | 32   | 1.5415          | 0.686    | 0                | 0                        | None |
| No log        | 3.0   | 48   | 1.8880          | 0.661    | 0                | 0                        | None |
| No log        | 4.0   | 64   | 2.3622          | 0.618    | 0                | 0                        | None |
| No log        | 5.0   | 80   | 3.2308          | 0.529    | 0                | 0                        | None |
| No log        | 6.0   | 96   | 3.3136          | 0.541    | 0                | 0                        | None |
| No log        | 7.0   | 112  | 3.8720          | 0.495    | 0                | 0                        | None |
| No log        | 8.0   | 128  | 3.7642          | 0.509    | 0                | 0                        | None |
| No log        | 9.0   | 144  | 4.3507          | 0.468    | 0                | 0                        | None |
| No log        | 10.0  | 160  | 5.2100          | 0.417    | 0                | 0                        | None |
| No log        | 11.0  | 176  | 5.5091          | 0.397    | 0                | 0                        | None |
| No log        | 12.0  | 192  | 6.1317          | 0.352    | 0                | 0                        | None |
| No log        | 13.0  | 208  | 6.8544          | 0.327    | 0                | 0                        | None |
| No log        | 14.0  | 224  | 8.7042          | 0.29     | 0                | 0                        | None |
| No log        | 15.0  | 240  | 8.6705          | 0.278    | 0                | 0                        | None |
| No log        | 16.0  | 256  | 8.6764          | 0.257    | 0                | 0                        | None |
| No log        | 17.0  | 272  | 9.9202          | 0.241    | 0                | 0                        | None |
| No log        | 18.0  | 288  | 11.7378         | 0.206    | 0                | 0                        | None |
| No log        | 19.0  | 304  | 11.2166         | 0.212    | 0                | 0                        | None |
| No log        | 20.0  | 320  | 13.0270         | 0.194    | 0                | 0                        | None |
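
The Dt and Df accuracies reported at the top of this card are not defined here; in the machine-unlearning literature Dt usually denotes a held-out test split and Df the forget split, but that reading is an assumption. Purely as an illustration, a minimal sketch of computing per-split top-1 accuracy with this checkpoint is given below; the DataLoader construction is hypothetical.

```python
# Hypothetical per-split accuracy computation; the split construction is an
# assumption, not taken from the original evaluation script.
import torch

@torch.no_grad()
def split_accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` over a DataLoader yielding (pixel_values, labels)."""
    model.eval().to(device)
    correct = total = 0
    for pixel_values, labels in loader:
        logits = model(pixel_values=pixel_values.to(device)).logits
        correct += (logits.argmax(-1).cpu() == labels).sum().item()
        total += labels.size(0)
    return correct / total

# Usage sketch: `test_loader` and `forget_loader` are hypothetical DataLoaders
# over the CIFAR-100 test split and the forgotten examples, respectively.
# dt_acc = split_accuracy(model, test_loader)
# df_acc = split_accuracy(model, forget_loader)
```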

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.2+cu118
  • Datasets 2.18.0
  • Tokenizers 0.15.2