ViT For Real Fake Image Classification
How to use date3k2/vit-real-fake-classification-v4 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="date3k2/vit-real-fake-classification-v4")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("date3k2/vit-real-fake-classification-v4")
model = AutoModelForImageClassification.from_pretrained("date3k2/vit-real-fake-classification-v4")
```

This model is a fine-tuned version of google/vit-base-patch16-224 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0585
- Accuracy: 0.9796
- F1: 0.9815
- Recall: 0.9815
- Precision: 0.9815

Further details on the model description, intended uses, and training data: more information needed.
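For reference, the two snippets above can be combined into one end-to-end prediction. This is a minimal sketch, not part of the original card: it reuses the sample parrots image URL from the pipeline example, and the label names are whatever the checkpoint's config defines in `id2label`.

```python
# End-to-end sketch: preprocess an image, run the model, and map the
# top logit back to a label name. Assumes the standard ViT
# image-classification head; label names come from model.config.id2label.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "date3k2/vit-real-fake-classification-v4"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]          # probabilities over the label set
pred = int(probs.argmax())                 # index of the most likely label
print(model.config.id2label[pred], float(probs[pred]))
```

The pipeline helper does all of this internally; the explicit version is useful when you need the raw probabilities rather than the pipeline's formatted output.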
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|---|---|---|---|---|---|---|---|
| 0.1295 | 1.0 | 233 | 0.2414 | 0.9151 | 0.9280 | 0.9912 | 0.8723 |
| 0.4466 | 2.0 | 466 | 0.1042 | 0.9646 | 0.9680 | 0.9718 | 0.9643 |
| 0.3302 | 3.0 | 699 | 0.0667 | 0.9764 | 0.9786 | 0.9776 | 0.9795 |
| 0.0003 | 4.0 | 932 | 0.0995 | 0.9731 | 0.9758 | 0.9796 | 0.9720 |
| 0.0002 | 5.0 | 1165 | 0.0585 | 0.9796 | 0.9815 | 0.9815 | 0.9815 |
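As a sanity check on the metric columns, F1 is the harmonic mean of precision and recall; recomputing it from the epoch-1 row reproduces the reported value:

```python
# Recompute F1 from the epoch-1 precision and recall in the table above.
precision, recall = 0.8723, 0.9912
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # ≈ 0.9280, matching the F1 column
```

The same identity holds for every row; in the final epoch precision and recall coincide at 0.9815, so F1 equals them exactly.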