vit_base_patch16_224-finetuned-SkinDisease

This model is a fine-tuned version of google/vit-base-patch16-224 on a custom skin disease dataset (image_folder format).
It achieves the following results on the evaluation set:

  • Loss: 0.1992
  • Accuracy: 0.9343

🧠 Model description

This model is a Vision Transformer (ViT) fine-tuned for skin disease classification.
It processes input images of size 224×224 pixels and outputs the most likely class.
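As a rough illustration of how a ViT tokenizes a 224×224 input (a NumPy sketch, not code from this repository): the image is cut into 16×16-pixel patches, giving a 14×14 grid of 196 patch tokens that the transformer processes (plus a [CLS] token used for classification).

```python
import numpy as np

image_size, patch_size = 224, 16
grid = image_size // patch_size   # 14 patches per side
num_patches = grid * grid         # 196 patch tokens

# Cut a dummy image into non-overlapping 16x16x3 patches.
image = np.zeros((image_size, image_size, 3), dtype=np.float32)
patches = image.reshape(grid, patch_size, grid, patch_size, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(num_patches, -1)
print(patches.shape)  # (196, 768): 196 patches of 16*16*3 = 768 values each
```

In the real model each flattened patch is linearly projected to the hidden dimension before entering the transformer; the reshape above only shows the tokenization geometry.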


🔗 Intended uses & limitations

  • ✅ Intended as a clinical decision-support aid; not for standalone medical diagnosis.
  • ✅ Designed for educational, research, and proof-of-concept use.

🧪 Training and evaluation data

  • Dataset used: Custom image dataset with labeled skin diseases
  • Preprocessing: Resized to 224×224, normalized
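The preprocessing above can be sketched in NumPy as follows. This is an illustrative approximation, not the repository's code: the nearest-neighbor resize stands in for the image processor's resampling, and the mean/std of 0.5 is the common ViT image-processor default; check the checkpoint's preprocessor_config.json for the exact values.

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbor resize of an HxWx3 uint8 image to size x size."""
    h, w, _ = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols[None, :]]

def normalize(img: np.ndarray) -> np.ndarray:
    """Scale to [0, 1], normalize with mean/std 0.5 (maps to [-1, 1]),
    and return a channels-first batch of shape (1, 3, 224, 224)."""
    x = img.astype(np.float32) / 255.0
    x = (x - 0.5) / 0.5
    return x.transpose(2, 0, 1)[None, ...]

raw = np.full((600, 400, 3), 128, dtype=np.uint8)  # dummy mid-gray photo
batch = normalize(resize_nearest(raw))
print(batch.shape)  # (1, 3, 224, 224)
```

In practice the transformers image processor associated with the checkpoint handles all of this in one call.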

βš™οΈ Training procedure

Hyperparameters:

  • Learning Rate: 5e-05
  • Epochs: 10
  • Batch Size: 32
  • Optimizer: Adam
  • Scheduler: Linear with warmup
  • Seed: 42
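The "linear with warmup" scheduler listed above ramps the learning rate from 0 up to the base rate over the warmup steps, then decays it linearly to 0 by the final step. A minimal sketch (the warmup and total step counts below are illustrative; the card does not state them):

```python
def lr_at(step: int, total_steps: int, warmup_steps: int,
          base_lr: float = 5e-5) -> float:
    """Learning rate under a linear-warmup, linear-decay schedule."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps                    # warmup ramp
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return base_lr * max(0.0, remaining)                        # linear decay

total, warmup = 1000, 100
print(lr_at(0, total, warmup))     # 0.0 (start of warmup)
print(lr_at(100, total, warmup))   # 5e-05 (peak, end of warmup)
print(lr_at(1000, total, warmup))  # 0.0 (fully decayed)
```

This mirrors the behavior of transformers' linear schedule with warmup, as typically configured through the Trainer's warmup arguments.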

Training results:

| Epoch | Val Loss | Accuracy |
|------:|---------:|---------:|
| 1     | 0.8248   | 0.7647   |
| 2     | 0.4236   | 0.8748   |
| 3     | 0.3154   | 0.9021   |
| 4     | 0.2695   | 0.9106   |
| 5     | 0.2381   | 0.9198   |
| 6     | 0.2407   | 0.9218   |
| 7     | 0.2160   | 0.9278   |
| 8     | 0.2121   | 0.9283   |
| 9     | 0.2044   | 0.9303   |
| 10    | 0.1992   | 0.9343   |

🧰 Framework versions

  • transformers: 4.33.2
  • pytorch: 2.0.0
  • datasets: 2.1.0
  • tokenizers: 0.13.3