JohnnyTheFox committed (verified)
Commit 33ed145 · 1 Parent(s): 0f86f1b

Upload folder using huggingface_hub
models--Intel--dpt-hybrid-midas/refs/main ADDED
@@ -0,0 +1 @@
+ 11eaf7a1cf4bd70740697dbc216f98980c0aeb03
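`refs/main` pins the `main` branch of the local cache to the commit hash above; the files themselves live under `snapshots/<hash>/`. This layout is what `huggingface_hub` produces; a minimal sketch of recreating it (`snapshot_download` is the standard API, and the resulting path follows the hub cache convention):

```python
from huggingface_hub import snapshot_download

# resolves "main" to a commit hash and mirrors that revision's files
# under models--Intel--dpt-hybrid-midas/snapshots/<commit_hash>/
local_dir = snapshot_download("Intel/dpt-hybrid-midas", revision="main")
print(local_dir)
```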
models--Intel--dpt-hybrid-midas/snapshots/11eaf7a1cf4bd70740697dbc216f98980c0aeb03/README.md ADDED
@@ -0,0 +1,166 @@
+ ---
+ license: apache-2.0
+ tags:
+ - vision
+ - depth-estimation
+ widget:
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
+   example_title: Tiger
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
+   example_title: Teapot
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
+   example_title: Palace
+ model-index:
+ - name: dpt-hybrid-midas
+   results:
+   - task:
+       type: monocular-depth-estimation
+       name: Monocular Depth Estimation
+     dataset:
+       type: MIX-6
+       name: MIX-6
+     metrics:
+     - type: Zero-shot transfer
+       value: 11.06
+       name: Zero-shot transfer
+       config: Zero-shot transfer
+       verified: false
+
+ ---
+
+ ## Model Details: DPT-Hybrid (also known as MiDaS 3.0)
+
+ The Dense Prediction Transformer (DPT) is a model trained on 1.4 million images for monocular depth estimation.
+ It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. (2021) and first released in [this repository](https://github.com/isl-org/DPT).
+ DPT uses the Vision Transformer (ViT) as its backbone and adds a neck and head on top for monocular depth estimation.
+ ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg)
+
+ This repository hosts the "hybrid" variant described in the paper. DPT-Hybrid differs from DPT by using [ViT-hybrid](https://huggingface.co/google/vit-hybrid-base-bit-384) as its backbone and tapping some of the backbone's intermediate activations.
+
+ This model card was written jointly by the Hugging Face team and Intel.
+
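+ To see this backbone / neck / head split in code, you can list the model's top-level submodules. A minimal sketch (the printed module names follow the current Transformers DPT implementation and may change between versions):
+
+ ```python
+ from transformers import DPTForDepthEstimation
+
+ model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas")
+
+ # expect the ViT-hybrid backbone, the reassembly/fusion neck,
+ # and the depth-estimation head as top-level children
+ for name, module in model.named_children():
+     print(name, "->", type(module).__name__)
+ ```
+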
+ | Model Detail | Description |
+ | ----------- | ----------- |
+ | Model Authors - Company | Intel |
+ | Date | December 22, 2022 |
+ | Version | 1 |
+ | Type | Computer Vision - Monocular Depth Estimation |
+ | Paper or Other Resources | [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) and [GitHub Repo](https://github.com/isl-org/DPT) |
+ | License | Apache 2.0 |
+ | Questions or Comments | [Community Tab](https://huggingface.co/Intel/dpt-hybrid-midas/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ) |
+
+ | Intended Use | Description |
+ | ----------- | ----------- |
+ | Primary intended uses | You can use the raw model for zero-shot monocular depth estimation. See the [model hub](https://huggingface.co/models?search=dpt) to look for fine-tuned versions on a task that interests you. |
+ | Primary intended users | Anyone doing monocular depth estimation |
+ | Out-of-scope uses | In most cases, this model will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people. |
+
+ ### How to use
+
+ Here is how to use this model for zero-shot depth estimation on an image:
+
+ ```python
+ from PIL import Image
+ import numpy as np
+ import requests
+ import torch
+
+ from transformers import DPTImageProcessor, DPTForDepthEstimation
+
+ image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
+ model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas", low_cpu_mem_usage=True)
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ # prepare image for the model
+ inputs = image_processor(images=image, return_tensors="pt")
+
+ with torch.no_grad():
+     outputs = model(**inputs)
+     predicted_depth = outputs.predicted_depth
+
+ # interpolate to original size
+ prediction = torch.nn.functional.interpolate(
+     predicted_depth.unsqueeze(1),
+     size=image.size[::-1],
+     mode="bicubic",
+     align_corners=False,
+ )
+
+ # visualize the prediction
+ output = prediction.squeeze().cpu().numpy()
+ formatted = (output * 255 / np.max(output)).astype("uint8")
+ depth = Image.fromarray(formatted)
+ depth.show()
+ ```
+
+ For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt).
+
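+ Alternatively, the high-level `pipeline` API wraps the same preprocessing, inference, and resizing steps. A minimal sketch (the `depth-estimation` pipeline task is available in recent versions of Transformers):
+
+ ```python
+ from transformers import pipeline
+
+ depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")
+ result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
+ result["depth"].show()  # PIL image with the predicted depth map
+ ```
+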
+ | Factors | Description |
+ | ----------- | ----------- |
+ | Groups | Multiple datasets compiled together |
+ | Instrumentation | - |
+ | Environment | Inference was run on an Intel Xeon Platinum 8280 CPU @ 2.70 GHz with 8 physical cores and an NVIDIA RTX 2080 GPU. |
+ | Card Prompts | Model deployment on alternate hardware and software will change model performance. |
+
+ | Metrics | Description |
+ | ----------- | ----------- |
+ | Model performance measures | Zero-shot transfer |
+ | Decision thresholds | - |
+ | Approaches to uncertainty and variability | - |
+
+ | Training and Evaluation Data | Description |
+ | ----------- | ----------- |
+ | Datasets | The training dataset, called MIX 6, contains around 1.4M images. The model was initialized with ImageNet-pretrained weights. |
+ | Motivation | To build a robust monocular depth prediction network |
+ | Preprocessing | "We resize the image such that the longer side is 384 pixels and train on random square crops of size 384. ... We perform random horizontal flips for data augmentation." See [Ranftl et al. (2021)](https://arxiv.org/abs/2103.13413) for more details; an illustrative sketch follows below. |
+
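+ As an illustration of the preprocessing quoted above, here is a rough re-creation with torchvision (a sketch under stated assumptions, not the authors' training code; `resize_longer_side` is a hypothetical helper):
+
+ ```python
+ from PIL import Image
+ import torchvision.transforms as T
+
+ def resize_longer_side(img: Image.Image, target: int = 384) -> Image.Image:
+     # scale so the longer side equals `target`, preserving aspect ratio
+     w, h = img.size
+     scale = target / max(w, h)
+     return img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
+
+ # random square crops of size 384 plus random horizontal flips;
+ # pad_if_needed covers the shorter side falling below 384 after resizing
+ augment = T.Compose([
+     T.Lambda(resize_longer_side),
+     T.RandomCrop(384, pad_if_needed=True),
+     T.RandomHorizontalFlip(p=0.5),
+ ])
+ ```
+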
+ ## Quantitative Analyses
+ | Model | Training set | DIW WHDR | ETH3D AbsRel | Sintel AbsRel | KITTI δ>1.25 | NYU δ>1.25 | TUM δ>1.25 |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | DPT - Large | MIX 6 | 10.82 (-13.2%) | 0.089 (-31.2%) | 0.270 (-17.5%) | 8.46 (-64.6%) | 8.32 (-12.9%) | 9.97 (-30.3%) |
+ | DPT - Hybrid | MIX 6 | 11.06 (-11.2%) | 0.093 (-27.6%) | 0.274 (-16.2%) | 11.56 (-51.6%) | 8.69 (-9.0%) | 10.89 (-23.2%) |
+ | MiDaS | MIX 6 | 12.95 (+3.9%) | 0.116 (-10.5%) | 0.329 (+0.5%) | 16.08 (-32.7%) | 8.71 (-8.8%) | 12.51 (-12.5%) |
+ | MiDaS [30] | MIX 5 | 12.46 | 0.129 | 0.327 | 23.90 | 9.55 | 14.29 |
+ | Li [22] | MD [22] | 23.15 | 0.181 | 0.385 | 36.29 | 27.52 | 29.54 |
+ | Li [21] | MC [21] | 26.52 | 0.183 | 0.405 | 47.94 | 18.57 | 17.71 |
+ | Wang [40] | WS [40] | 19.09 | 0.205 | 0.390 | 31.92 | 29.57 | 20.18 |
+ | Xian [45] | RW [45] | 14.59 | 0.186 | 0.422 | 34.08 | 27.00 | 25.02 |
+ | Casser [5] | CS [8] | 32.80 | 0.235 | 0.422 | 21.15 | 39.58 | 37.18 |
+
+ Table 1. Comparison to the state of the art on monocular depth estimation. We evaluate zero-shot cross-dataset transfer according to the protocol defined in [30]. Relative performance is computed with respect to the original MiDaS model [30]. Lower is better for all metrics. ([Ranftl et al., 2021](https://arxiv.org/abs/2103.13413))
+
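+ For reference, the two metric families in Table 1 can be computed as follows; a minimal sketch of the standard definitions (our illustration, not code from the paper):
+
+ ```python
+ import numpy as np
+
+ def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
+     # mean absolute relative error: mean(|pred - gt| / gt); lower is better
+     mask = gt > 0
+     return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))
+
+ def pct_delta_gt(pred: np.ndarray, gt: np.ndarray, thresh: float = 1.25) -> float:
+     # percentage of pixels whose ratio max(pred/gt, gt/pred) exceeds thresh;
+     # this is the "δ > 1.25" column, where lower is also better
+     mask = gt > 0
+     ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
+     return float(100.0 * np.mean(ratio > thresh))
+ ```
+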
+ | Ethical Considerations | Description |
+ | ----------- | ----------- |
+ | Data | The training data come from multiple image datasets compiled together. |
+ | Human life | The model is not intended to inform decisions central to human life or flourishing. It was trained on an aggregated set of monocular depth image datasets. |
+ | Mitigations | No additional risk mitigation strategies were considered during model development. |
+ | Risks and harms | The extent of the risks involved in using the model remains unknown. |
+ | Use cases | - |
+
+ | Caveats and Recommendations |
+ | ----------- |
+ | Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. There are no additional caveats or recommendations for this model. |
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @article{DBLP:journals/corr/abs-2103-13413,
+   author     = {Ren{\'{e}} Ranftl and
+                 Alexey Bochkovskiy and
+                 Vladlen Koltun},
+   title      = {Vision Transformers for Dense Prediction},
+   journal    = {CoRR},
+   volume     = {abs/2103.13413},
+   year       = {2021},
+   url        = {https://arxiv.org/abs/2103.13413},
+   eprinttype = {arXiv},
+   eprint     = {2103.13413},
+   timestamp  = {Wed, 07 Apr 2021 15:31:46 +0200},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
models--Intel--dpt-hybrid-midas/snapshots/11eaf7a1cf4bd70740697dbc216f98980c0aeb03/config.json ADDED
@@ -0,0 +1,459 @@
+ {
+   "_commit_hash": null,
+   "architectures": [
+     "DPTForDepthEstimation"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "auxiliary_loss_weight": 0.4,
+   "backbone_config": {
+     "_name_or_path": "",
+     "add_cross_attention": false,
+     "architectures": null,
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": null,
+     "chunk_size_feed_forward": 0,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "depths": [
+       3,
+       4,
+       9
+     ],
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "drop_path_rate": 0.0,
+     "early_stopping": false,
+     "embedding_dynamic_padding": true,
+     "embedding_size": 64,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": null,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "global_padding": "SAME",
+     "hidden_act": "relu",
+     "hidden_sizes": [
+       256,
+       512,
+       1024,
+       2048
+     ],
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "layer_type": "bottleneck",
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "min_length": 0,
+     "model_type": "bit",
+     "no_repeat_ngram_size": 0,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_channels": 3,
+     "num_groups": 32,
+     "num_return_sequences": 1,
+     "out_features": [
+       "stage1",
+       "stage2",
+       "stage3"
+     ],
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "output_stride": 32,
+     "pad_token_id": null,
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "stage_names": [
+       "stem",
+       "stage1",
+       "stage2",
+       "stage3"
+     ],
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "transformers_version": "4.26.0.dev0",
+     "typical_p": 1.0,
+     "use_bfloat16": false,
+     "width_factor": 1
+   },
+   "backbone_featmap_shape": [
+     1,
+     1024,
+     24,
+     24
+   ],
+   "backbone_out_indices": [
+     2,
+     5,
+     8,
+     11
+   ],
+   "fusion_hidden_size": 256,
+   "head_in_index": -1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2",
+     "3": "LABEL_3",
+     "4": "LABEL_4",
+     "5": "LABEL_5",
+     "6": "LABEL_6",
+     "7": "LABEL_7",
+     "8": "LABEL_8",
+     "9": "LABEL_9",
+     "10": "LABEL_10",
+     "11": "LABEL_11",
+     "12": "LABEL_12",
+     "13": "LABEL_13",
+     "14": "LABEL_14",
+     "15": "LABEL_15",
+     "16": "LABEL_16",
+     "17": "LABEL_17",
+     "18": "LABEL_18",
+     "19": "LABEL_19",
+     "20": "LABEL_20",
+     "21": "LABEL_21",
+     "22": "LABEL_22",
+     "23": "LABEL_23",
+     "24": "LABEL_24",
+     "25": "LABEL_25",
+     "26": "LABEL_26",
+     "27": "LABEL_27",
+     "28": "LABEL_28",
+     "29": "LABEL_29",
+     "30": "LABEL_30",
+     "31": "LABEL_31",
+     "32": "LABEL_32",
+     "33": "LABEL_33",
+     "34": "LABEL_34",
+     "35": "LABEL_35",
+     "36": "LABEL_36",
+     "37": "LABEL_37",
+     "38": "LABEL_38",
+     "39": "LABEL_39",
+     "40": "LABEL_40",
+     "41": "LABEL_41",
+     "42": "LABEL_42",
+     "43": "LABEL_43",
+     "44": "LABEL_44",
+     "45": "LABEL_45",
+     "46": "LABEL_46",
+     "47": "LABEL_47",
+     "48": "LABEL_48",
+     "49": "LABEL_49",
+     "50": "LABEL_50",
+     "51": "LABEL_51",
+     "52": "LABEL_52",
+     "53": "LABEL_53",
+     "54": "LABEL_54",
+     "55": "LABEL_55",
+     "56": "LABEL_56",
+     "57": "LABEL_57",
+     "58": "LABEL_58",
+     "59": "LABEL_59",
+     "60": "LABEL_60",
+     "61": "LABEL_61",
+     "62": "LABEL_62",
+     "63": "LABEL_63",
+     "64": "LABEL_64",
+     "65": "LABEL_65",
+     "66": "LABEL_66",
+     "67": "LABEL_67",
+     "68": "LABEL_68",
+     "69": "LABEL_69",
+     "70": "LABEL_70",
+     "71": "LABEL_71",
+     "72": "LABEL_72",
+     "73": "LABEL_73",
+     "74": "LABEL_74",
+     "75": "LABEL_75",
+     "76": "LABEL_76",
+     "77": "LABEL_77",
+     "78": "LABEL_78",
+     "79": "LABEL_79",
+     "80": "LABEL_80",
+     "81": "LABEL_81",
+     "82": "LABEL_82",
+     "83": "LABEL_83",
+     "84": "LABEL_84",
+     "85": "LABEL_85",
+     "86": "LABEL_86",
+     "87": "LABEL_87",
+     "88": "LABEL_88",
+     "89": "LABEL_89",
+     "90": "LABEL_90",
+     "91": "LABEL_91",
+     "92": "LABEL_92",
+     "93": "LABEL_93",
+     "94": "LABEL_94",
+     "95": "LABEL_95",
+     "96": "LABEL_96",
+     "97": "LABEL_97",
+     "98": "LABEL_98",
+     "99": "LABEL_99",
+     "100": "LABEL_100",
+     "101": "LABEL_101",
+     "102": "LABEL_102",
+     "103": "LABEL_103",
+     "104": "LABEL_104",
+     "105": "LABEL_105",
+     "106": "LABEL_106",
+     "107": "LABEL_107",
+     "108": "LABEL_108",
+     "109": "LABEL_109",
+     "110": "LABEL_110",
+     "111": "LABEL_111",
+     "112": "LABEL_112",
+     "113": "LABEL_113",
+     "114": "LABEL_114",
+     "115": "LABEL_115",
+     "116": "LABEL_116",
+     "117": "LABEL_117",
+     "118": "LABEL_118",
+     "119": "LABEL_119",
+     "120": "LABEL_120",
+     "121": "LABEL_121",
+     "122": "LABEL_122",
+     "123": "LABEL_123",
+     "124": "LABEL_124",
+     "125": "LABEL_125",
+     "126": "LABEL_126",
+     "127": "LABEL_127",
+     "128": "LABEL_128",
+     "129": "LABEL_129",
+     "130": "LABEL_130",
+     "131": "LABEL_131",
+     "132": "LABEL_132",
+     "133": "LABEL_133",
+     "134": "LABEL_134",
+     "135": "LABEL_135",
+     "136": "LABEL_136",
+     "137": "LABEL_137",
+     "138": "LABEL_138",
+     "139": "LABEL_139",
+     "140": "LABEL_140",
+     "141": "LABEL_141",
+     "142": "LABEL_142",
+     "143": "LABEL_143",
+     "144": "LABEL_144",
+     "145": "LABEL_145",
+     "146": "LABEL_146",
+     "147": "LABEL_147",
+     "148": "LABEL_148",
+     "149": "LABEL_149"
+   },
+   "image_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "is_hybrid": true,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_10": 10,
+     "LABEL_100": 100,
+     "LABEL_101": 101,
+     "LABEL_102": 102,
+     "LABEL_103": 103,
+     "LABEL_104": 104,
+     "LABEL_105": 105,
+     "LABEL_106": 106,
+     "LABEL_107": 107,
+     "LABEL_108": 108,
+     "LABEL_109": 109,
+     "LABEL_11": 11,
+     "LABEL_110": 110,
+     "LABEL_111": 111,
+     "LABEL_112": 112,
+     "LABEL_113": 113,
+     "LABEL_114": 114,
+     "LABEL_115": 115,
+     "LABEL_116": 116,
+     "LABEL_117": 117,
+     "LABEL_118": 118,
+     "LABEL_119": 119,
+     "LABEL_12": 12,
+     "LABEL_120": 120,
+     "LABEL_121": 121,
+     "LABEL_122": 122,
+     "LABEL_123": 123,
+     "LABEL_124": 124,
+     "LABEL_125": 125,
+     "LABEL_126": 126,
+     "LABEL_127": 127,
+     "LABEL_128": 128,
+     "LABEL_129": 129,
+     "LABEL_13": 13,
+     "LABEL_130": 130,
+     "LABEL_131": 131,
+     "LABEL_132": 132,
+     "LABEL_133": 133,
+     "LABEL_134": 134,
+     "LABEL_135": 135,
+     "LABEL_136": 136,
+     "LABEL_137": 137,
+     "LABEL_138": 138,
+     "LABEL_139": 139,
+     "LABEL_14": 14,
+     "LABEL_140": 140,
+     "LABEL_141": 141,
+     "LABEL_142": 142,
+     "LABEL_143": 143,
+     "LABEL_144": 144,
+     "LABEL_145": 145,
+     "LABEL_146": 146,
+     "LABEL_147": 147,
+     "LABEL_148": 148,
+     "LABEL_149": 149,
+     "LABEL_15": 15,
+     "LABEL_16": 16,
+     "LABEL_17": 17,
+     "LABEL_18": 18,
+     "LABEL_19": 19,
+     "LABEL_2": 2,
+     "LABEL_20": 20,
+     "LABEL_21": 21,
+     "LABEL_22": 22,
+     "LABEL_23": 23,
+     "LABEL_24": 24,
+     "LABEL_25": 25,
+     "LABEL_26": 26,
+     "LABEL_27": 27,
+     "LABEL_28": 28,
+     "LABEL_29": 29,
+     "LABEL_3": 3,
+     "LABEL_30": 30,
+     "LABEL_31": 31,
+     "LABEL_32": 32,
+     "LABEL_33": 33,
+     "LABEL_34": 34,
+     "LABEL_35": 35,
+     "LABEL_36": 36,
+     "LABEL_37": 37,
+     "LABEL_38": 38,
+     "LABEL_39": 39,
+     "LABEL_4": 4,
+     "LABEL_40": 40,
+     "LABEL_41": 41,
+     "LABEL_42": 42,
+     "LABEL_43": 43,
+     "LABEL_44": 44,
+     "LABEL_45": 45,
+     "LABEL_46": 46,
+     "LABEL_47": 47,
+     "LABEL_48": 48,
+     "LABEL_49": 49,
+     "LABEL_5": 5,
+     "LABEL_50": 50,
+     "LABEL_51": 51,
+     "LABEL_52": 52,
+     "LABEL_53": 53,
+     "LABEL_54": 54,
+     "LABEL_55": 55,
+     "LABEL_56": 56,
+     "LABEL_57": 57,
+     "LABEL_58": 58,
+     "LABEL_59": 59,
+     "LABEL_6": 6,
+     "LABEL_60": 60,
+     "LABEL_61": 61,
+     "LABEL_62": 62,
+     "LABEL_63": 63,
+     "LABEL_64": 64,
+     "LABEL_65": 65,
+     "LABEL_66": 66,
+     "LABEL_67": 67,
+     "LABEL_68": 68,
+     "LABEL_69": 69,
+     "LABEL_7": 7,
+     "LABEL_70": 70,
+     "LABEL_71": 71,
+     "LABEL_72": 72,
+     "LABEL_73": 73,
+     "LABEL_74": 74,
+     "LABEL_75": 75,
+     "LABEL_76": 76,
+     "LABEL_77": 77,
+     "LABEL_78": 78,
+     "LABEL_79": 79,
+     "LABEL_8": 8,
+     "LABEL_80": 80,
+     "LABEL_81": 81,
+     "LABEL_82": 82,
+     "LABEL_83": 83,
+     "LABEL_84": 84,
+     "LABEL_85": 85,
+     "LABEL_86": 86,
+     "LABEL_87": 87,
+     "LABEL_88": 88,
+     "LABEL_89": 89,
+     "LABEL_9": 9,
+     "LABEL_90": 90,
+     "LABEL_91": 91,
+     "LABEL_92": 92,
+     "LABEL_93": 93,
+     "LABEL_94": 94,
+     "LABEL_95": 95,
+     "LABEL_96": 96,
+     "LABEL_97": 97,
+     "LABEL_98": 98,
+     "LABEL_99": 99
+   },
+   "layer_norm_eps": 1e-12,
+   "model_type": "dpt",
+   "neck_hidden_sizes": [
+     256,
+     512,
+     768,
+     768
+   ],
+   "neck_ignore_stages": [
+     0,
+     1
+   ],
+   "num_attention_heads": 12,
+   "num_channels": 3,
+   "num_hidden_layers": 12,
+   "patch_size": 16,
+   "qkv_bias": true,
+   "readout_type": "project",
+   "reassemble_factors": [
+     1,
+     1,
+     1,
+     0.5
+   ],
+   "semantic_classifier_dropout": 0.1,
+   "semantic_loss_ignore_index": 255,
+   "torch_dtype": "float32",
+   "transformers_version": null,
+   "use_auxiliary_head": true,
+   "use_batch_norm_in_fusion_residual": false
+ }
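The `backbone_config` block is what makes this checkpoint "hybrid": `is_hybrid` is true and the backbone is a BiT (ResNet-style) model whose `stage1`–`stage3` features feed the DPT neck, alongside the ViT layers listed in `backbone_out_indices`. A minimal sketch for inspecting these fields (`DPTConfig` is the Transformers class that parses this file; the attribute names assume the current implementation):

```python
from transformers import DPTConfig

config = DPTConfig.from_pretrained("Intel/dpt-hybrid-midas")
print(config.is_hybrid)                   # True for this checkpoint
print(config.backbone_config.model_type)  # "bit": the ResNet-style hybrid stem
print(config.backbone_out_indices)        # ViT layers tapped by the neck
```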
models--Intel--dpt-hybrid-midas/snapshots/11eaf7a1cf4bd70740697dbc216f98980c0aeb03/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6c4d44f9d96ca3fa76dd3bbb153989a60b4ad5526559f3c598562a368d687ec
+ size 489648389
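`pytorch_model.bin` is stored as a Git LFS pointer: the repository tracks only the `oid`/`size` pair above, and the ~490 MB weight blob is fetched separately on checkout. A minimal sketch for checking a downloaded copy against the pointer (standard-library only):

```python
import hashlib

# hash the downloaded weights in 1 MiB chunks and compare to the LFS oid
sha256 = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert sha256.hexdigest() == (
    "b6c4d44f9d96ca3fa76dd3bbb153989a60b4ad5526559f3c598562a368d687ec"
)
```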