Update README.md
README.md (CHANGED)

```diff
@@ -6,10 +6,10 @@ tags:
 widget:
 - src: https://www.estal.com/FitxersWeb/331958/estal_carroussel_wg_spirits_5.jpg
   example_title: Glass
-- src: https://
-  example_title:
-- src: https://
-  example_title:
+- src: https://origamijapan.net/wp-content/uploads/2013/10/2_600-1.jpg
+  example_title: Paper
+- src: https://i0.wp.com/makezine.com/wp-content/uploads/2016/03/AdobeStock_79098618METAL.jpeg?ssl=1
+  example_title: Metal
 ---
 
 # Vision Transformer (base-sized model)
@@ -18,14 +18,6 @@ Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 2
 
 Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
 
-## Model description
-
-The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
-
-Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of the sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
-
-By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
-
 ## Dataset
 
 The dataset used spans six classes: glass, paper, cardboard, plastic, metal, and trash. Currently, the dataset consists of 2527 images:
```
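For reference, the removed "Model description" text quotes a 224x224 input resolution and 16x16 patches with a prepended [CLS] token. A quick sanity check of the sequence length the ViT encoder sees, using only those quoted figures:

```python
# Token count for a ViT encoder, per the model card's figures:
# 224x224 input, 16x16 patches, plus one [CLS] token prepended
# before absolute position embeddings are added.
image_size = 224
patch_size = 16

patches_per_side = image_size // patch_size   # 224 / 16 = 14
num_patches = patches_per_side ** 2           # 14 * 14 = 196
sequence_length = num_patches + 1             # +1 for the [CLS] token

print(patches_per_side, num_patches, sequence_length)  # 14 196 197
```

The classifier described in the card (a linear layer over the [CLS] token's last hidden state) therefore operates on the first of these 197 token positions.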