Update README.md

README.md CHANGED

@@ -15,4 +15,31 @@ metrics:
- bleu
library_name: transformers
pipeline_tag: image-classification
---

Model Overview:

The Vision Transformer (ViT) is a transformer encoder model for image recognition. It was pretrained on ImageNet-21k, a dataset of 14 million images spanning 21,843 classes, and then fine-tuned on ImageNet 2012, which consists of 1 million images across 1,000 classes.

How It Works:

Input Representation: Images are split into fixed-size patches (16x16 pixels), and each patch is linearly embedded. A special [CLS] token is prepended to the sequence; its final hidden state serves as a representation of the whole image for classification.
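
To make the arithmetic concrete: at 224x224 resolution, a 16x16 patch grid yields (224/16)^2 = 196 patch tokens, so the encoder sees 197 tokens including [CLS]. Below is a minimal PyTorch sketch of this step; the strided convolution and the layer names are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)   # a batch with one RGB image
patch_size, hidden_dim = 16, 768      # ViT-Base-like sizes (assumed)

# A stride-16 convolution is equivalent to cutting 16x16 patches and
# applying the same linear projection to each one.
patch_embed = nn.Conv2d(3, hidden_dim, kernel_size=patch_size, stride=patch_size)
patches = patch_embed(image)                 # (1, 768, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, 768): one token per patch

# Prepend a learnable [CLS] token.
cls_token = nn.Parameter(torch.zeros(1, 1, hidden_dim))
tokens = torch.cat([cls_token, tokens], dim=1)  # (1, 197, 768)
```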

Transformer Encoder: The model uses a transformer encoder architecture, similar to BERT for text, to process the image patches. Absolute position embeddings are added to encode spatial information before the sequence is fed into the transformer layers.

Classification: After the transformer layers, the final hidden state of the [CLS] token, which summarizes the features of the entire image, is used for image classification.
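
Continuing the sketch above, a position-embedding table, a stock encoder, and a linear head over the [CLS] state fit together as follows. This is a stand-in rather than the exact ViT block structure, with ViT-Base-like sizes assumed (12 layers, 12 heads, hidden size 768):

```python
import torch
import torch.nn as nn

hidden_dim, num_tokens = 768, 197                # 196 patch tokens + [CLS]
tokens = torch.randn(1, num_tokens, hidden_dim)  # embedded patches + [CLS]

# Add learnable absolute position embeddings.
pos_embed = nn.Parameter(torch.zeros(1, num_tokens, hidden_dim))
tokens = tokens + pos_embed

# A stock PyTorch encoder stands in for the ViT encoder blocks.
encoder_layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=12, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=12)
hidden = encoder(tokens)                         # (1, 197, 768)

# Classify from the final hidden state of the [CLS] token (position 0).
head = nn.Linear(hidden_dim, 1000)               # 1,000 ImageNet classes
logits = head(hidden[:, 0])                      # (1, 1000)
```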

Intended Uses:

Image Classification: ViT can be used directly for image classification. With a linear layer on top of the [CLS] token's final hidden state, the model classifies images into one of the 1,000 ImageNet classes.
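
For example, via the transformers library; the checkpoint id google/vit-base-patch16-224 is an assumption here, matching the base-size, 16x16-patch, 224x224 model described above:

```python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint id assumed from the model description above.
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class])
```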

Limitations:

Resolution Dependency: The model was fine-tuned on ImageNet at 224x224 resolution, but better performance is typically achieved by fine-tuning at higher resolutions such as 384x384. Larger model variants also tend to yield better results, at the cost of more computation.

Training Details:

Preprocessing: Images are resized to 224x224 pixels and normalized across RGB channels.
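
The exact resize target and normalization constants ship with the checkpoint's image processor, so they can be inspected rather than hard-coded (same assumed checkpoint id as above):

```python
from transformers import ViTImageProcessor

# Checkpoint id assumed from the model description above.
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
print(processor.size)        # target resolution, e.g. {'height': 224, 'width': 224}
print(processor.image_mean)  # per-channel normalization mean
print(processor.image_std)   # per-channel normalization std
```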

Training: Pretraining was conducted on TPUv3 hardware with a batch size of 4096 and learning rate warmup. Gradient clipping was applied during training to enhance stability.
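
A minimal PyTorch sketch of the two stability measures described here, learning rate warmup and gradient clipping; the model, data, and hyperparameter values are placeholders rather than the paper's settings:

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 1000)                 # placeholder for the ViT model
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
warmup_steps = 10_000

# Linear learning-rate warmup, then a constant rate (illustrative schedule).
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps)
)

for step in range(100):                      # stand-in training loop
    x, y = torch.randn(32, 768), torch.randint(0, 1000, (32,))  # fake batch
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```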

Evaluation Results:

Performance: Detailed evaluation results on various benchmarks are reported in the tables of the original paper. Fine-tuning the model at higher resolution typically improves classification accuracy.