Manojb committed
Commit 3af8265 (verified)
1 Parent(s): 30ced0a

Cloned from facebook/dinov2-with-registers-base

Files changed (4)
  1. README.md +75 -0
  2. config.json +50 -0
  3. model.safetensors +3 -0
  4. preprocessor_config.json +27 -0
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ library_name: transformers
+ pipeline_tag: image-feature-extraction
+ license: apache-2.0
+ tags:
+ - dino
+ - vision
+ inference: false
+ ---
+
+ # Vision Transformer (base-sized model) trained using DINOv2, with registers
+
+ Vision Transformer (ViT) model introduced in the paper [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Darcet et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
+
+ Disclaimer: The team releasing DINOv2 with registers did not write a model card for this model, so this model card has been written by the Hugging Face team.
+
+ ## Model description
+
+ The Vision Transformer (ViT) is a transformer encoder model (BERT-like) [originally introduced](https://arxiv.org/abs/2010.11929) for supervised image classification on ImageNet.
+
+ Researchers later found ways to make ViT work very well for self-supervised image feature extraction (i.e. learning meaningful features, also called embeddings) from
+ images without requiring any labels. Example papers include [DINOv2](https://huggingface.co/papers/2304.07193) and [MAE](https://arxiv.org/abs/2111.06377).
+
+ The authors of DINOv2 noticed that ViTs exhibit artifacts in their attention maps, caused by the model repurposing a few low-informative image patches as "registers" (internal scratch space). The proposed fix is to add a few dedicated learnable tokens (called "register" tokens) to the input sequence and simply discard their outputs after the forward pass (see the sketch below the figure). This results in:
+ - no artifacts
+ - interpretable attention maps
+ - and improved performance.
+
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
+ alt="drawing" width="600"/>
+
+ <small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>
+
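For intuition, here is a minimal PyTorch sketch of the register mechanism described above: a handful of extra learnable tokens are inserted into the token sequence next to the [CLS] and patch tokens, and their outputs are dropped again after the encoder. The `RegisterTokens` module and its method names are illustrative only, not the actual DINOv2 implementation.

```python
import torch
import torch.nn as nn

class RegisterTokens(nn.Module):
    """Hypothetical helper illustrating register tokens; not the DINOv2 code."""

    def __init__(self, hidden_size: int = 768, num_registers: int = 4):
        super().__init__()
        # a few extra learnable tokens, independent of the input image
        self.registers = nn.Parameter(torch.zeros(1, num_registers, hidden_size))

    def insert(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, 1 + num_patches, hidden), with the [CLS] token first;
        # the register tokens are placed right after [CLS]
        regs = self.registers.expand(tokens.shape[0], -1, -1)
        return torch.cat((tokens[:, :1], regs, tokens[:, 1:]), dim=1)

    def drop(self, tokens: torch.Tensor) -> torch.Tensor:
        # after the encoder, the register outputs are simply discarded again
        n = self.registers.shape[1]
        return torch.cat((tokens[:, :1], tokens[:, 1 + n:]), dim=1)
```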
+ Note that this model does not include any fine-tuned heads.
+
+ By pre-training, the model learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
+
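A rough sketch of that linear-probing recipe is shown below, assuming the [CLS] token sits at position 0 of `last_hidden_state` (as in the Transformers DINOv2 implementations) and using scikit-learn's `LogisticRegression` as the linear classifier. The tiny random "dataset" is only there to keep the snippet self-contained; replace it with your own labeled images.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.linear_model import LogisticRegression
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-with-registers-base")
model = AutoModel.from_pretrained("facebook/dinov2-with-registers-base").eval()

def cls_embeddings(images):
    # one 768-d [CLS] embedding per image
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0].numpy()

# stand-in "dataset": two random images with dummy labels, just to show the shapes
images = [
    Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))
    for _ in range(2)
]
labels = [0, 1]

# linear probe: a simple classifier on top of the frozen [CLS] features
clf = LogisticRegression(max_iter=1000).fit(cls_embeddings(images), labels)
```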
+ ## Intended uses & limitations
+
+ You can use the raw model for feature extraction. See the [model hub](https://huggingface.co/models?other=dinov2_with_registers) to look for
+ fine-tuned versions on a task that interests you.
+
+ ### How to use
+
+ Here is how to use this model:
+
+ ```python
+ from transformers import AutoImageProcessor, AutoModel
+ from PIL import Image
+ import requests
+
+ # load an example image from the COCO validation set
+ url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-base')
+ model = AutoModel.from_pretrained('facebook/dinov2-with-registers-base')
+
+ # preprocess the image and run a forward pass; last_hidden_state has one embedding per token
+ inputs = processor(images=image, return_tensors="pt")
+ outputs = model(**inputs)
+ last_hidden_states = outputs.last_hidden_state
+ ```
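Building on the example above: `last_hidden_state` contains one embedding per token. Assuming the usual DINOv2-with-registers layout ([CLS] token first, then the `num_register_tokens = 4` register tokens from `config.json`, then one token per 14×14 patch), it can be split into an image-level embedding and dense patch features. With the default 224×224 crop from `preprocessor_config.json` this gives (224/14)² = 256 patch tokens, i.e. 1 + 4 + 256 = 261 tokens of size 768:

```python
# continues the snippet above; the token ordering is an assumption based on the
# DINOv2-with-registers architecture ([CLS], register tokens, patch tokens)
num_registers = model.config.num_register_tokens  # 4 for this checkpoint

cls_token = last_hidden_states[:, 0]                           # image-level embedding, shape (1, 768)
register_outputs = last_hidden_states[:, 1:1 + num_registers]  # typically discarded
patch_tokens = last_hidden_states[:, 1 + num_registers:]       # dense features, shape (1, 256, 768)

print(last_hidden_states.shape)  # expected: torch.Size([1, 261, 768]) for a 224x224 crop
```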
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @misc{darcet2024visiontransformersneedregisters,
+   title={Vision Transformers Need Registers},
+   author={Timothée Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
+   year={2024},
+   eprint={2309.16588},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2309.16588},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "apply_layernorm": true,
+   "architectures": [
+     "Dinov2WithRegistersModel"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "drop_path_rate": 0.0,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 768,
+   "image_size": 518,
+   "initializer_range": 0.02,
+   "interpolate_antialias": true,
+   "interpolate_offset": 0.0,
+   "layer_norm_eps": 1e-06,
+   "layerscale_value": 1.0,
+   "mlp_ratio": 4,
+   "model_type": "dinov2_with_registers",
+   "num_attention_heads": 12,
+   "num_channels": 3,
+   "num_hidden_layers": 12,
+   "num_register_tokens": 4,
+   "out_features": [
+     "stage12"
+   ],
+   "out_indices": [
+     12
+   ],
+   "patch_size": 14,
+   "qkv_bias": true,
+   "reshape_hidden_states": true,
+   "stage_names": [
+     "stem",
+     "stage1",
+     "stage2",
+     "stage3",
+     "stage4",
+     "stage5",
+     "stage6",
+     "stage7",
+     "stage8",
+     "stage9",
+     "stage10",
+     "stage11",
+     "stage12"
+   ],
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.0.dev0",
+   "use_swiglu_ffn": false
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a6f7b3b9fa4b8732e707476a03cd6cdce210048582f21aafb7991c17d98e362
+ size 346358296
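The three lines above are a Git LFS pointer: the actual weights are stored out of band and identified by their SHA-256 digest. A small sketch for checking a locally downloaded `model.safetensors` against that digest (the file path is a placeholder for wherever your copy lives):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # hash the file in chunks so large checkpoints don't need to fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "7a6f7b3b9fa4b8732e707476a03cd6cdce210048582f21aafb7991c17d98e362"
print(sha256_of("model.safetensors") == expected)  # True if the download is intact
```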
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "crop_size": {
+     "height": 224,
+     "width": 224
+   },
+   "do_center_crop": true,
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_processor_type": "BitImageProcessor",
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "resample": 3,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "shortest_edge": 256
+   }
+ }
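For reference, this config describes the standard ImageNet-style pipeline: convert to RGB, resize the shortest edge to 256 with bicubic resampling (`resample: 3`), center-crop to 224×224, rescale pixel values by 1/255 (`rescale_factor`), and normalize with the ImageNet mean and standard deviation. A rough torchvision equivalent is sketched below; it illustrates what the values mean, not what `BitImageProcessor` literally executes.

```python
from torchvision import transforms
from torchvision.transforms import InterpolationMode

# approximate torchvision counterpart of preprocessor_config.json
preprocess = transforms.Compose([
    transforms.Resize(256, interpolation=InterpolationMode.BICUBIC),  # "size": {"shortest_edge": 256}, "resample": 3
    transforms.CenterCrop(224),                                       # "crop_size": 224 x 224
    transforms.ToTensor(),                                            # rescales by 1/255 ("rescale_factor")
    transforms.Normalize(mean=[0.485, 0.456, 0.406],                  # "image_mean"
                         std=[0.229, 0.224, 0.225]),                  # "image_std"
])
# usage: tensor = preprocess(pil_image.convert("RGB"))  # "do_convert_rgb": true
```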