### Model Card
- Model Name: Food Type Image Detection Vision Transformer
- Original Model: Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224.
- It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer).
- Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
- The pre-trained checkpoint does not provide any fine-tuned heads, as these were zeroed out by the Google researchers.
- Model Type: Image Classification
- Model Architecture: Vision Transformer (ViT)
- Fine-tuning:
- Fine-tuned on the Food Image Classification Dataset, using 12 of its 35 food varieties
- Optimizer: AdamW
- Epochs: 20
- Model Performance: Achieved 96.23% accuracy across all classes of the Food Image Classification Dataset
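The patch-embedding step described above (splitting a 224x224 image into 16x16 patches and linearly embedding them, as in the ViT paper) can be sketched in plain NumPy. This is an illustrative sketch, not the model's actual code: the random projection matrix `W` stands in for the learned embedding weights, and the 768-dimensional embedding size matches ViT-Base.

```python
import numpy as np

# Illustrative sketch of ViT patch embedding (not the model's real weights).
img = np.random.rand(3, 224, 224)   # one RGB image, channels-first
P = 16                              # patch resolution (16x16)
G = 224 // P                        # 14 patches per side -> 14*14 = 196 patches
D = 768                             # embedding dimension (ViT-Base)

# Split into non-overlapping 16x16 patches and flatten each to a vector
# of length 3*16*16 = 768: shape (196, 768).
patches = (
    img.reshape(3, G, P, G, P)      # (C, grid_h, P, grid_w, P)
       .transpose(1, 3, 0, 2, 4)    # (grid_h, grid_w, C, P, P)
       .reshape(G * G, 3 * P * P)   # (196, 768)
)

# Linear embedding: in the real model W is learned; here it is random.
W = np.random.rand(3 * P * P, D) * 0.01
tokens = patches @ W                # (196, 768) sequence fed to the Transformer
print(tokens.shape)                 # (196, 768)
```

In the full model, a learnable `[CLS]` token and position embeddings are then added before the sequence enters the Transformer encoder.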