nielsr HF Staff committed on
Commit 94b9e0a · 1 Parent(s): 7d8a062

Add model card

Files changed (1): README.md (+100, -0)
README.md ADDED
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
- imagenet-21k
---

# BEiT (base-sized model)

BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei, and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).

Disclaimer: The team releasing BEiT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The BEiT model is a Vision Transformer (ViT), a transformer encoder model (BERT-like). In contrast to the original ViT, BEiT is pre-trained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective is to predict, for the masked patches, the visual tokens produced by the encoder of OpenAI's DALL-E VQ-VAE. The model was subsequently fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.

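To make that objective concrete, here is a minimal, self-contained sketch of the masked-image-modeling loss. It is a conceptual illustration, not the released training code: the `encoder` stand-in, the ~40% masking ratio, and the 8,192-entry visual-token vocabulary (the DALL-E VQ-VAE codebook size) are assumptions drawn from the paper.

```python
import torch
import torch.nn as nn

# Conceptual sketch only -- not the released BEiT training code.
# Assumptions: 196 patches per image (14 x 14 for 224x224 at patch size 16),
# a visual-token vocabulary of 8192 (the DALL-E VQ-VAE codebook size),
# hidden size 768, and ~40% of patches masked.
num_patches, vocab_size, hidden_size = 196, 8192, 768

encoder = nn.Identity()                        # stand-in for the Transformer encoder
mim_head = nn.Linear(hidden_size, vocab_size)  # predicts one visual token per patch

patch_embeddings = torch.randn(2, num_patches, hidden_size)     # dummy batch
visual_tokens = torch.randint(0, vocab_size, (2, num_patches))  # dummy tokenizer output

mask = torch.rand(2, num_patches) < 0.4        # mask ~40% of the patches
logits = mim_head(encoder(patch_embeddings))   # (2, num_patches, vocab_size)

# cross-entropy is computed only on the masked positions
loss = nn.functional.cross_entropy(logits[mask], visual_tokens[mask])
```
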
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded, and a [CLS] token is prepended to the sequence for use in classification tasks. Contrary to the original ViT, BEiT uses relative position embeddings (similar to T5) inside the self-attention layers instead of adding absolute position embeddings to the input sequence.

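At 224x224 with 16x16 patches, this gives 14 × 14 = 196 patch tokens plus the [CLS] token, i.e. a sequence length of 197. A quick sketch to verify this, assuming the `BeitModel` class from transformers and the `microsoft/beit-base-patch16-224` checkpoint:

```python
from transformers import BeitFeatureExtractor, BeitModel
from PIL import Image
import torch

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224')
# loading the bare encoder from the classification checkpoint warns about
# the unused classification head, which is expected here
model = BeitModel.from_pretrained('microsoft/beit-base-patch16-224')

image = Image.new('RGB', (640, 480))  # any image; the extractor resizes it to 224x224
inputs = feature_extractor(images=image, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768]): [CLS] + 196 patches
```
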
By pre-training on images, the model learns an inner representation that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. For BEiT this is typically done by mean-pooling the final hidden states of the patch tokens, rather than by taking the last hidden state of the [CLS] token as in the original ViT.

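As a minimal sketch of that recipe (the frozen-encoder linear probe and the `num_labels` value are illustrative assumptions, not the paper's fine-tuning setup):

```python
import torch
import torch.nn as nn
from transformers import BeitModel

# linear-probe sketch: freeze the pre-trained encoder, train only a small head
num_labels = 10  # hypothetical downstream dataset
encoder = BeitModel.from_pretrained('microsoft/beit-base-patch16-224')
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(encoder.config.hidden_size, num_labels)

def classify(pixel_values):
    hidden = encoder(pixel_values=pixel_values).last_hidden_state  # (B, 197, 768)
    pooled = hidden[:, 1:].mean(dim=1)  # mean-pool the patch tokens (skip [CLS])
    return head(pooled)

logits = classify(torch.randn(1, 3, 224, 224))  # dummy batch of one image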
```

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests

# load an image from the COCO 2017 validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224')
model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224')

# preprocess the image and run it through the model
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# the model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
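
If you only need predictions, the same checkpoint can also be used through the high-level image-classification pipeline (a minimal sketch; it returns a list of label/score dictionaries):

```python
from transformers import pipeline

classifier = pipeline('image-classification', model='microsoft/beit-base-patch16-224')
predictions = classifier('http://images.cocodataset.org/val2017/000000039769.jpg')
print(predictions[0])  # e.g. {'label': ..., 'score': ...}
```
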
Currently, both the feature extractor and model support PyTorch.

## Training data

The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).

Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).

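In torchvision terms, the validation-time preprocessing is roughly the following sketch (an approximation of the repository's `datasets.py`, not its exact code):

```python
from torchvision import transforms

# approximate equivalent of BEiT's validation preprocessing:
# resize to 224x224, convert to [0, 1] tensors, then normalize each
# RGB channel with mean 0.5 and std 0.5 (mapping pixel values to [-1, 1])
val_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
```
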
### Pretraining

For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).

## Evaluation results

For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384), and that increasing the model size results in better performance.

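Loading such a higher-resolution variant is a one-line change, assuming the separately published `microsoft/beit-base-patch16-384` checkpoint:

```python
from transformers import BeitFeatureExtractor, BeitForImageClassification

# same API as above, but fine-tuned at resolution 384x384
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384')
model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-384')
```
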
## BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
  author        = {Hangbo Bao and
                   Li Dong and
                   Furu Wei},
  title         = {BEiT: {BERT} Pre-Training of Image Transformers},
  journal       = {CoRR},
  volume        = {abs/2106.08254},
  year          = {2021},
  url           = {https://arxiv.org/abs/2106.08254},
  archivePrefix = {arXiv},
  eprint        = {2106.08254},
  timestamp     = {Tue, 29 Jun 2021 16:55:04 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

```bibtex
@inproceedings{deng2009imagenet,
  title        = {Imagenet: A large-scale hierarchical image database},
  author       = {Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle    = {2009 IEEE conference on computer vision and pattern recognition},
  pages        = {248--255},
  year         = {2009},
  organization = {IEEE}
}
```