How to use `Augusto777/vit-base-patch16-224-MSC-dmae` with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="Augusto777/vit-base-patch16-224-MSC-dmae")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load the model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("Augusto777/vit-base-patch16-224-MSC-dmae")
model = AutoModelForImageClassification.from_pretrained("Augusto777/vit-base-patch16-224-MSC-dmae")
```

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the results reported in the training table below on the evaluation set.
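The direct-load snippet stops after loading; turning the model's logits into a predicted label takes one more post-processing step (softmax, argmax, label lookup). A minimal sketch of that step on plain tensors, where the three-class `id2label` mapping is purely hypothetical — in practice the labels come from `model.config.id2label` and the logits from `model(**processor(images=image, return_tensors="pt")).logits`:

```python
import torch

# Hypothetical label mapping; the real one is model.config.id2label.
id2label = {0: "class_a", 1: "class_b", 2: "class_c"}

# Stand-in logits for a single image; in practice these come from the model.
logits = torch.tensor([[0.1, 2.0, 0.3]])

probs = torch.softmax(logits, dim=-1)  # normalize logits to probabilities
pred = probs.argmax(dim=-1).item()     # index of the top-scoring class
print(id2label[pred], probs[0, pred].item())
```

The `pipeline` helper performs these same steps internally and returns a list of `{"label": ..., "score": ...}` dicts.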
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following results were logged during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 0.67 | 1 | 1.2258 | 0.5 |
| No log | 2.0 | 3 | 1.0536 | 0.7 |
| No log | 2.67 | 4 | 0.9143 | 0.75 |
| No log | 4.0 | 6 | 0.6899 | 0.9 |
| No log | 4.67 | 7 | 0.6300 | 0.95 |
| No log | 6.0 | 9 | 0.5069 | 0.9 |
| 0.8554 | 6.67 | 10 | 0.4671 | 0.9 |
| 0.8554 | 8.0 | 12 | 0.4312 | 0.9 |