SobanHM and sayakpaul (HF Staff) committed

Commit 83c0bbf · 0 parent(s)

Duplicate from sayakpaul/nyu_depth_v2

Co-authored-by: Sayak Paul <sayakpaul@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,55 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
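Each `.gitattributes` rule above pairs a glob pattern with Git attribute settings (`key=value` sets an attribute, a leading `-` unsets it). As a minimal sketch, one rule can be split into its parts in pure Python; the helper name here is ours, not part of the repo:

```python
def parse_gitattributes_line(line):
    """Split a .gitattributes rule into (pattern, attributes dict).

    `key=value` maps to that value, `-name` records an unset attribute
    as False, and a bare `name` is recorded as True.
    """
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        if attr.startswith("-"):
            parsed[attr[1:]] = False  # e.g. `-text` disables text conversion
        elif "=" in attr:
            key, value = attr.split("=", 1)
            parsed[key] = value
        else:
            parsed[attr] = True
    return pattern, parsed


pattern, attrs = parse_gitattributes_line("*.tar filter=lfs diff=lfs merge=lfs -text")
print(pattern)  # *.tar
print(attrs)    # {'filter': 'lfs', 'diff': 'lfs', 'merge': 'lfs', 'text': False}
```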
README.md ADDED
@@ -0,0 +1,246 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - depth-estimation
+ task_ids: []
+ pretty_name: NYU Depth V2
+ tags:
+ - depth-estimation
+ paperswithcode_id: nyuv2
+ dataset_info:
+   features:
+   - name: image
+     dtype: image
+   - name: depth_map
+     dtype: image
+   splits:
+   - name: train
+     num_bytes: 20212097551
+     num_examples: 47584
+   - name: validation
+     num_bytes: 240785762
+     num_examples: 654
+   download_size: 35151124480
+   dataset_size: 20452883313
+ ---
+ 
+ # Dataset Card for NYU Depth V2
+ 
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Visualization](#visualization)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+ 
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** [NYU Depth Dataset V2 homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
+ - **Repository:** The FastDepth [repository](https://github.com/dwofk/fast-depth), from which the dataset in this repository was sourced. It is a preprocessed version of the original NYU Depth V2 dataset linked above, and is also used in [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/nyu_depth_v2).
+ - **Papers:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273)
+ - **Point of Contact:** [Nathan Silberman](mailto:silberman@cs.nyu.edu) and [Diana Wofk](mailto:dwofk@alum.mit.edu)
+ 
+ ### Dataset Summary
+ 
+ As per the [dataset homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html):
+ 
+ The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft [Kinect](http://www.xbox.com/kinect). It features:
+ 
+ * 1449 densely labeled pairs of aligned RGB and depth images
+ * 464 new scenes taken from 3 cities
+ * 407,024 new unlabeled frames
+ * Each object is labeled with a class and an instance number (cup1, cup2, cup3, etc.)
+ 
+ The dataset has several components:
+ 
+ * Labeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels.
+ * Raw: The raw RGB, depth, and accelerometer data as provided by the Kinect.
+ * Toolbox: Useful functions for manipulating the data and labels.
+ 
+ ### Supported Tasks
+ 
+ - `depth-estimation`: Depth estimation is the task of approximating the perceived depth of a given image. In other words, it's about measuring the distance of each image pixel from the camera.
+ - `semantic-segmentation`: Semantic segmentation is the task of associating every pixel of an image with a class label.
+ 
+ There are other tasks supported by this dataset as well. You can find more about them by referring to [this resource](https://paperswithcode.com/dataset/nyuv2).
+ 
+ 
+ ### Languages
+ 
+ English.
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ A data point comprises an image and its annotation depth map, for both the `train` and `validation` splits.
+ 
+ ```
+ {
+   'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB at 0x1FF32A3EDA0>,
+   'depth_map': <PIL.PngImagePlugin.PngImageFile image mode=L at 0x1FF32E5B978>,
+ }
+ ```
+ 
+ ### Data Fields
+ 
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column, `dataset[0]["image"]`, the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `depth_map`: A `PIL.Image.Image` object containing the annotation depth map.
+ 
+ ### Data Splits
+ 
+ The data is split into training and validation splits. The training data contains 47584 images, and the validation data contains 654 images.
+ 
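The split sizes match the `dataset_info` metadata in the card header; a quick back-of-the-envelope check on the totals and the per-example footprint on disk (numbers copied from the metadata above):

```python
# Byte counts and example counts from the card's dataset_info metadata
train_bytes, train_n = 20_212_097_551, 47_584
val_bytes, val_n = 240_785_762, 654

# The two splits together should equal the advertised dataset_size
dataset_size = train_bytes + val_bytes
print(dataset_size)  # 20452883313

# Average on-disk size of one (image, depth_map) pair in the train split
avg_train_mib = train_bytes / train_n / 2**20
print(f"{avg_train_mib:.2f} MiB per train example")
```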
+ ## Visualization
+ 
+ You can use the following code snippet to visualize samples from the dataset:
+ 
+ ```py
+ from datasets import load_dataset
+ import numpy as np
+ import matplotlib.pyplot as plt
+ 
+ 
+ cmap = plt.cm.viridis
+ 
+ ds = load_dataset("sayakpaul/nyu_depth_v2")
+ 
+ 
+ def colored_depthmap(depth, d_min=None, d_max=None):
+     if d_min is None:
+         d_min = np.min(depth)
+     if d_max is None:
+         d_max = np.max(depth)
+     depth_relative = (depth - d_min) / (d_max - d_min)
+     return 255 * cmap(depth_relative)[:, :, :3]  # H, W, C
+ 
+ 
+ def merge_into_row(input, depth_target):
+     input = np.array(input)
+     depth_target = np.squeeze(np.array(depth_target))
+ 
+     d_min = np.min(depth_target)
+     d_max = np.max(depth_target)
+     depth_target_col = colored_depthmap(depth_target, d_min, d_max)
+     img_merge = np.hstack([input, depth_target_col])
+ 
+     return img_merge
+ 
+ 
+ random_indices = np.random.choice(len(ds["train"]), 9).tolist()
+ train_set = ds["train"]
+ 
+ plt.figure(figsize=(15, 6))
+ 
+ for i, idx in enumerate(random_indices):
+     ax = plt.subplot(3, 3, i + 1)
+     image_viz = merge_into_row(
+         train_set[idx]["image"], train_set[idx]["depth_map"]
+     )
+     plt.imshow(image_viz.astype("uint8"))
+     plt.axis("off")
+ ```
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ The rationale from [the paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) that introduced the NYU Depth V2 dataset:
+ 
+ > We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation.
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection
+ 
+ > The dataset consists of 1449 RGBD images, gathered from a wide range of commercial and residential buildings in three different US cities, comprising 464 different indoor scenes across 26 scene classes. A dense per-pixel labeling was obtained for each image using Amazon Mechanical Turk.
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ This is an involved process. Interested readers are referred to Sections 2, 3, and 4 of the [original paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf).
+ 
+ #### Who are the annotators?
+ 
+ Amazon Mechanical Turk (AMT) annotators.
+ 
+ ### Personal and Sensitive Information
+ 
+ [More Information Needed]
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ * Original NYU Depth V2 dataset: Nathan Silberman, Derek Hoiem, Pushmeet Kohli, Rob Fergus
+ * Preprocessed version: Diana Wofk, Fangchang Ma, Tien-Ju Yang, Sertac Karaman, Vivienne Sze
+ 
+ ### Licensing Information
+ 
+ The preprocessed NYU Depth V2 dataset is licensed under an [MIT License](https://github.com/dwofk/fast-depth/blob/master/LICENSE).
+ 
+ ### Citation Information
+ 
+ ```bibtex
+ @inproceedings{Silberman:ECCV12,
+   author    = {Nathan Silberman, Derek Hoiem, Pushmeet Kohli and Rob Fergus},
+   title     = {Indoor Segmentation and Support Inference from RGBD Images},
+   booktitle = {ECCV},
+   year      = {2012}
+ }
+ 
+ @inproceedings{icra_2019_fastdepth,
+   author    = {Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne},
+   title     = {FastDepth: Fast Monocular Depth Estimation on Embedded Systems},
+   booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
+   year      = {2019}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ Thanks to [@sayakpaul](https://huggingface.co/sayakpaul) for adding this dataset.
data/train-000000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a6a59407fa909a8ae70ffa8495893ef818bf6f349d0d2e486dd10333ddf256a
+ size 3003340800
data/train-000001.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:717ea0c58afdd747dfaf73e592f87fcf0261c62c39a5c2b39dd10f67952ca1e3
+ size 3003658240
data/train-000002.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2530c35ecb55e64bb7fdfae9e1ddddfaa339555e37dc19d74d98048c896ec409
+ size 3003586560
data/train-000003.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad08e48ffffda394095e7bb9785b5cc89fe50421486bb784dd086639c65a5099
+ size 3002982400
data/train-000004.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:942896ecb705f3f791d1d313121f8700a192e54dfbf091312cacbd6adbeb5563
+ size 3003443200
data/train-000005.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6363a837492be5325a1d8e2989fa2947f588949b0f5a9818bb65504907cad64
+ size 3003064320
data/train-000006.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77500df4b84d8af2aa87ba503a767ce0a8a693bfd32486d9b688b66cdfb172b4
+ size 3002992640
data/train-000007.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1aa7e0787d6fe9b160de213815566a40288da40106be8d5099e301936e347d1
+ size 3003289600
data/train-000008.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef9fa6bfeee9a8b150cd28837261e879ed1075c4b11d06c31792af0079861610
+ size 3003443200
data/train-000009.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cd051a16f763a0c40f3a5c7e5d720d6eb6566fb9553bd58ddeb9ae7923cad39
+ size 3003064320
data/train-000010.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:602109f56d08318ec20e496fc161c5ba1b5a25a506b048a54f5778e19194e0a6
+ size 3003217920
data/train-000011.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ef8723b66e1c09010f8c5305b5c7b9c526d455cd943aeac0280556fc3c5ede3
+ size 1098700800
data/val-000000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a75d57458afe063e6d9158a6cb3f41eabd859699f46043b0b8def2e2995049bb
+ size 1001553920
data/val-000001.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92dc10ad1b799fb810011fcf5e85b017f949baae919903d66612e32d37e40bf2
+ size 14786560
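The `data/*.tar` entries above are Git LFS pointer files, not the archives themselves: each records the LFS spec version, a `sha256` object ID, and the true size in bytes. A minimal sketch of reading one (the pointer text is copied from `data/val-000001.tar` above; the helper name is ours):

```python
def parse_lfs_pointer(text):
    """Parse the `key value` lines of a Git LFS pointer file into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the real file's byte count
    return fields


# Pointer contents of data/val-000001.tar, as shown in the diff above
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:92dc10ad1b799fb810011fcf5e85b017f949baae919903d66612e32d37e40bf2
size 14786560
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 14786560
```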
nyu_depth_v2.py ADDED
@@ -0,0 +1,113 @@
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """NYU-Depth V2."""
+ 
+ 
+ import io
+ 
+ import datasets
+ import h5py
+ import numpy as np
+ 
+ _CITATION = """\
+ @inproceedings{Silberman:ECCV12,
+   author    = {Nathan Silberman, Derek Hoiem, Pushmeet Kohli and Rob Fergus},
+   title     = {Indoor Segmentation and Support Inference from RGBD Images},
+   booktitle = {ECCV},
+   year      = {2012}
+ }
+ @inproceedings{icra_2019_fastdepth,
+   author    = {Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne},
+   title     = {FastDepth: Fast Monocular Depth Estimation on Embedded Systems},
+   booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
+   year      = {2019}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.
+ """
+ 
+ _HOMEPAGE = "https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html"
+ 
+ _LICENSE = "Apache 2.0 License"
+ 
+ _URLS = {
+     "train": [f"data/train-{i:06d}.tar" for i in range(12)],
+     "val": [f"data/val-{i:06d}.tar" for i in range(2)],
+ }
+ 
+ _IMG_EXTENSIONS = [".h5"]
+ 
+ 
+ class NYUDepthV2(datasets.GeneratorBasedBuilder):
+     """NYU-Depth V2 dataset."""
+ 
+     VERSION = datasets.Version("1.0.0")
+ 
+     def _info(self):
+         features = datasets.Features(
+             {"image": datasets.Image(), "depth_map": datasets.Image()}
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+ 
+     def _is_image_file(self, filename):
+         # Reference: https://github.com/dwofk/fast-depth/blob/master/dataloaders/dataloader.py#L21-L23
+         return any(filename.endswith(extension) for extension in _IMG_EXTENSIONS)
+ 
+     def _h5_loader(self, bytes_stream):
+         # Reference: https://github.com/dwofk/fast-depth/blob/master/dataloaders/dataloader.py#L8-L13
+         f = io.BytesIO(bytes_stream)
+         h5f = h5py.File(f, "r")
+         rgb = np.array(h5f["rgb"])
+         rgb = np.transpose(rgb, (1, 2, 0))
+         depth = np.array(h5f["depth"])
+         return rgb, depth
+ 
+     def _split_generators(self, dl_manager):
+         archives = dl_manager.download(_URLS)
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "archives": [
+                         dl_manager.iter_archive(archive) for archive in archives["train"]
+                     ]
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "archives": [
+                         dl_manager.iter_archive(archive) for archive in archives["val"]
+                     ]
+                 },
+             ),
+         ]
+ 
+     def _generate_examples(self, archives):
+         idx = 0
+         for archive in archives:
+             for path, file in archive:
+                 if self._is_image_file(path):
+                     image, depth = self._h5_loader(file.read())
+                     yield idx, {"image": image, "depth_map": depth}
+                     idx += 1
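`_h5_loader` reads the RGB array from each `.h5` sample in channels-first `(3, H, W)` layout and transposes it to the channels-last `(H, W, 3)` layout expected for PIL-style images. The reshaping step in isolation, on a synthetic stand-in array (not real dataset data; 640×480 is the Kinect resolution used by NYU Depth V2):

```python
import numpy as np

# Synthetic stand-in for the `rgb` dataset inside one .h5 sample: (C, H, W)
rgb_chw = np.zeros((3, 480, 640), dtype=np.uint8)

# The same transpose `_h5_loader` applies: (C, H, W) -> (H, W, C)
rgb_hwc = np.transpose(rgb_chw, (1, 2, 0))

print(rgb_hwc.shape)  # (480, 640, 3)
```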