bike025 parquet-converter committed · verified
Commit bde5b44 · 0 parent(s)

Duplicate from ETHZurich/biwi_kinect_head_pose

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (3):
  1. .gitattributes +37 -0
  2. README.md +381 -0
  3. biwi_kinect_head_pose.py +215 -0
.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,381 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ pretty_name: Biwi Kinect Head Pose Database
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - other
+ task_ids: []
+ paperswithcode_id: biwi
+ tags:
+ - head-pose-estimation
+ dataset_info:
+   features:
+   - name: sequence_number
+     dtype: string
+   - name: subject_id
+     dtype: string
+   - name: rgb
+     sequence: image
+   - name: rgb_cal
+     struct:
+     - name: intrisic_mat
+       dtype:
+         array2_d:
+           shape:
+           - 3
+           - 3
+           dtype: float64
+     - name: extrinsic_mat
+       struct:
+       - name: rotation
+         dtype:
+           array2_d:
+             shape:
+             - 3
+             - 3
+             dtype: float64
+       - name: translation
+         sequence: float64
+         length: 3
+   - name: depth
+     sequence: string
+   - name: depth_cal
+     struct:
+     - name: intrisic_mat
+       dtype:
+         array2_d:
+           shape:
+           - 3
+           - 3
+           dtype: float64
+     - name: extrinsic_mat
+       struct:
+       - name: rotation
+         dtype:
+           array2_d:
+             shape:
+             - 3
+             - 3
+             dtype: float64
+       - name: translation
+         sequence: float64
+         length: 3
+   - name: head_pose_gt
+     sequence:
+     - name: center
+       sequence: float64
+       length: 3
+     - name: rotation
+       dtype:
+         array2_d:
+           shape:
+           - 3
+           - 3
+           dtype: float64
+   - name: head_template
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 6914063
+     num_examples: 24
+   download_size: 6014398431
+   dataset_size: 6914063
+ ---
+
+ # Dataset Card for Biwi Kinect Head Pose Database
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Biwi Kinect Head Pose homepage](https://icu.ee.ethz.ch/research/datsets.html)
+ - **Repository:** [Needs More Information]
+ - **Paper:** [Biwi Kinect Head Pose paper](https://link.springer.com/article/10.1007/s11263-012-0549-0)
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Gabriele Fanelli](mailto:gabriele.fanelli@gmail.com)
+
+ ### Dataset Summary
+
+ The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device. It contains 15K images of 20 people (6 females and 14 males), 4 of whom were recorded twice.
+
+ For each frame, there is:
+ - a depth image,
+ - a corresponding RGB image (both 640x480 pixels),
+ - an annotation.
+
+ The head pose range covers about ±75 degrees of yaw and ±60 degrees of pitch. The ground truth is the 3D location of the head and its rotation.
+
+ ### Data Processing
+
+ Example code, provided by the authors, for reading a compressed binary depth image file.
+
+ <details>
+ <summary> View C++ Code </summary>
+
+ ```cpp
+ /*
+  * Gabriele Fanelli
+  *
+  * fanelli@vision.ee.ethz.ch
+  *
+  * BIWI, ETHZ, 2011
+  *
+  * Part of the Biwi Kinect Head Pose Database
+  *
+  * Example code for reading a compressed binary depth image file.
+  *
+  * THE SOFTWARE IS PROVIDED "AS IS" AND THE PROVIDER GIVES NO EXPRESS OR IMPLIED WARRANTIES OF ANY KIND,
+  * INCLUDING WITHOUT LIMITATION THE WARRANTIES OF FITNESS FOR ANY PARTICULAR PURPOSE AND NON-INFRINGEMENT.
+  * IN NO EVENT SHALL THE PROVIDER BE HELD RESPONSIBLE FOR LOSS OR DAMAGE CAUSED BY THE USE OF THE SOFTWARE.
+  */
+
+ #include <iostream>
+ #include <fstream>
+ #include <cstdio>
+ #include <cstdint>
+ #include <cstdlib>
+
+ int16_t* loadDepthImageCompressed( const char* fname ){
+
+     // now read the depth image
+     FILE* pFile = fopen(fname, "rb");
+     if(!pFile){
+         std::cerr << "could not open file " << fname << std::endl;
+         return NULL;
+     }
+
+     int im_width = 0;
+     int im_height = 0;
+     bool success = true;
+
+     success &= ( fread(&im_width,sizeof(int),1,pFile) == 1 );  // read width of depthmap
+     success &= ( fread(&im_height,sizeof(int),1,pFile) == 1 ); // read height of depthmap
+
+     int16_t* depth_img = new int16_t[im_width*im_height];
+
+     int numempty;
+     int numfull;
+     int p = 0;
+
+     while(p < im_width*im_height ){
+
+         success &= ( fread( &numempty,sizeof(int),1,pFile) == 1 );
+
+         for(int i = 0; i < numempty; i++)
+             depth_img[ p + i ] = 0;
+
+         success &= ( fread( &numfull,sizeof(int), 1, pFile) == 1 );
+         success &= ( fread( &depth_img[ p + numempty ], sizeof(int16_t), numfull, pFile) == (unsigned int) numfull );
+         p += numempty+numfull;
+     }
+
+     fclose(pFile);
+
+     if(success)
+         return depth_img;
+     else{
+         delete [] depth_img;
+         return NULL;
+     }
+ }
+
+ float* read_gt(const char* fname){
+
+     // try to read in the ground truth from a binary file
+     FILE* pFile = fopen(fname, "rb");
+     if(!pFile){
+         std::cerr << "could not open file " << fname << std::endl;
+         return NULL;
+     }
+
+     float* data = new float[6];
+
+     bool success = true;
+     success &= ( fread( &data[0], sizeof(float), 6, pFile) == 6 );
+     fclose(pFile);
+
+     if(success)
+         return data;
+     else{
+         delete [] data;
+         return NULL;
+     }
+ }
+ ```
+
+ </details>
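For convenience, the same run-length decoding can be sketched in Python. This is a hedged port of the C++ reader above, not an official utility; the file layout (two little-endian `int32` dimensions, then alternating zero-run / raw-value-run counts) is taken directly from that code, and the function name is ours.

```python
import struct


def load_depth_image_compressed(fname):
    """Decode a Biwi compressed depth .bin file into a flat list of int16 values.

    Layout (mirrors the authors' C++ reader): two little-endian int32s
    (width, height), then alternating pairs: an int32 count of zero pixels,
    followed by an int32 count of raw int16 depth values and those values.
    """
    with open(fname, "rb") as f:
        width, height = struct.unpack("<ii", f.read(8))
        depth = []
        while len(depth) < width * height:
            # Run of empty (zero-depth) pixels.
            (num_empty,) = struct.unpack("<i", f.read(4))
            depth.extend([0] * num_empty)
            # Run of raw int16 depth values.
            (num_full,) = struct.unpack("<i", f.read(4))
            depth.extend(struct.unpack(f"<{num_full}h", f.read(2 * num_full)))
    return width, height, depth
```

The returned list is row-major, so pixel `(row, col)` is `depth[row * width + col]`, matching the C++ buffer.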
+
+
+ ### Supported Tasks and Leaderboards
+
+ The Biwi Kinect Head Pose Database supports the following tasks:
+ - Head pose estimation
+ - Pose estimation
+ - Face verification
+
+ ### Languages
+
+ [Needs More Information]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from the Biwi Kinect Head Pose dataset is provided below:
+
+ ```
+ {
+     'sequence_number': '12',
+     'subject_id': 'M06',
+     'rgb': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=640x480 at 0x7F53A6446C10>, .....],
+     'rgb_cal':
+     {
+         'intrisic_mat': [[517.679, 0.0, 320.0], [0.0, 517.679, 240.5], [0.0, 0.0, 1.0]],
+         'extrinsic_mat':
+         {
+             'rotation': [[0.999947, 0.00432361, 0.00929419], [-0.00446314, 0.999877, 0.0150443], [-0.009228, -0.015085, 0.999844]],
+             'translation': [-24.0198, 5.8896, -13.2308]
+         }
+     },
+     'depth': ['../hpdb/12/frame_00003_depth.bin', .....],
+     'depth_cal':
+     {
+         'intrisic_mat': [[575.816, 0.0, 320.0], [0.0, 575.816, 240.0], [0.0, 0.0, 1.0]],
+         'extrinsic_mat':
+         {
+             'rotation': [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
+             'translation': [0.0, 0.0, 0.0]
+         }
+     },
+     'head_pose_gt':
+     {
+         'center': [[43.4019, -30.7038, 906.864], [43.0202, -30.8683, 906.94], [43.0255, -30.5611, 906.659], .....],
+         'rotation': [[[0.980639, 0.109899, 0.162077], [-0.11023, 0.993882, -0.00697376], [-0.161851, -0.011027, 0.986754]], ......]
+     }
+ }
+ ```
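As a sanity check on the calibration fields, the intrinsic matrix can project a ground-truth head center into image coordinates with the standard pinhole model. This is an illustrative sketch using the sample's own numbers; `project` is a hypothetical helper, and the small extrinsic rotation/translation between the depth and color cameras is ignored for brevity.

```python
def project(intrinsic, point3d):
    """Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    x, y, z = point3d
    fx, _, cx = intrinsic[0]
    _, fy, cy = intrinsic[1]
    return fx * x / z + cx, fy * y / z + cy


# Values copied from the sample instance above: rgb intrinsics and the
# first head center (extrinsics ignored in this sketch).
intrinsic = [[517.679, 0.0, 320.0], [0.0, 517.679, 240.5], [0.0, 0.0, 1.0]]
center = [43.4019, -30.7038, 906.864]
u, v = project(intrinsic, center)
# The projected head center lands inside the 640x480 frame.
assert 0 <= u < 640 and 0 <= v < 480
```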
+
+ ### Data Fields
+
+ - `sequence_number`: The sequence number in the dataset. There are a total of 24 sequences.
+ - `subject_id`: The identifier of the recorded subject. There are a total of 20 subjects (6 females and 14 males); 4 people were recorded twice.
+ - `rgb`: List of PNG frames containing the poses.
+ - `rgb_cal`: Calibration information for the color camera, which includes the intrinsic matrix and the global rotation and translation.
+ - `depth`: List of depth frames for the poses.
+ - `depth_cal`: Calibration information for the depth camera, which includes the intrinsic matrix and the global rotation and translation.
+ - `head_pose_gt`: Ground truth information, i.e., the location of the center of the head in 3D and the head rotation, encoded as a 3x3 rotation matrix.
+
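The 3x3 ground-truth rotation matrices are often converted to yaw/pitch/roll angles for evaluation. A minimal sketch follows, assuming a ZYX (yaw-pitch-roll) Euler convention; conventions differ between papers, so treat this as illustrative rather than the dataset's canonical decomposition.

```python
import math


def rotation_to_euler(r):
    """Decompose a 3x3 rotation matrix into (yaw, pitch, roll) in degrees,
    assuming the ZYX yaw-pitch-roll convention (one of several in use)."""
    yaw = math.degrees(math.atan2(r[1][0], r[0][0]))
    pitch = math.degrees(math.asin(-r[2][0]))
    roll = math.degrees(math.atan2(r[2][1], r[2][2]))
    return yaw, pitch, roll


# Rotation matrix taken from the sample instance in this card.
r = [[0.980639, 0.109899, 0.162077],
     [-0.11023, 0.993882, -0.00697376],
     [-0.161851, -0.011027, 0.986754]]
yaw, pitch, roll = rotation_to_euler(r)
# Angles fall inside the dataset's stated ranges (about ±75° yaw, ±60° pitch).
assert abs(yaw) <= 75 and abs(pitch) <= 60
```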
+
+ ### Data Splits
+
+ All the data is contained in the training set.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ From the dataset's README:
+ > The database contains 24 sequences acquired with a Kinect sensor. 20 people (some were recorded twice - 6 women and 14 men) were recorded while turning their heads, sitting in front of the sensor, at roughly one meter of distance.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ From the dataset's README:
+ > This database is made available for non-commercial use such as university research and education.
+
+ ### Citation Information
+
+ ```bibtex
+ @article{fanelli_IJCV,
+   author = {Fanelli, Gabriele and Dantone, Matthias and Gall, Juergen and Fossati, Andrea and Van Gool, Luc},
+   title = {Random Forests for Real Time 3D Face Analysis},
+   journal = {Int. J. Comput. Vision},
+   year = {2013},
+   month = {February},
+   volume = {101},
+   number = {3},
+   pages = {437--458}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset.
biwi_kinect_head_pose.py ADDED
@@ -0,0 +1,215 @@
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Biwi Kinect Head Pose Database."""
+
+
+ import glob
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{fanelli_IJCV,
+   author = {Fanelli, Gabriele and Dantone, Matthias and Gall, Juergen and Fossati, Andrea and Van Gool, Luc},
+   title = {Random Forests for Real Time 3D Face Analysis},
+   journal = {Int. J. Comput. Vision},
+   year = {2013},
+   month = {February},
+   volume = {101},
+   number = {3},
+   pages = {437--458}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device. It contains 15K images of 20 people (6 females and 14 males), 4 of whom were recorded twice.
+ """
+
+ _HOMEPAGE = "https://icu.ee.ethz.ch/research/datsets.html"
+
+ _LICENSE = "This database is made available for non-commercial use such as university research and education."
+
+ _URLS = {
+     "kinect_head_pose_db": "https://data.vision.ee.ethz.ch/cvl/gfanelli/kinect_head_pose_db.tgz",
+ }
+
+ # Maps each of the 24 sequences to its subject; repeated subjects
+ # (e.g. F03, M01) are the people who were recorded twice.
+ _sequence_to_subject_map = {
+     "01": "F01",
+     "02": "F02",
+     "03": "F03",
+     "04": "F04",
+     "05": "F05",
+     "06": "F06",
+     "07": "M01",
+     "08": "M02",
+     "09": "M03",
+     "10": "M04",
+     "11": "M05",
+     "12": "M06",
+     "13": "M07",
+     "14": "M08",
+     "15": "F03",
+     "16": "M09",
+     "17": "M10",
+     "18": "F05",
+     "19": "M11",
+     "20": "M12",
+     "21": "F02",
+     "22": "M01",
+     "23": "M13",
+     "24": "M14",
+ }
+
+
+ class BiwiKinectHeadPose(datasets.GeneratorBasedBuilder):
+     """Builder for the Biwi Kinect Head Pose Database."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "sequence_number": datasets.Value("string"),
+                     "subject_id": datasets.Value("string"),
+                     "rgb": datasets.Sequence(datasets.Image()),
+                     "rgb_cal": {
+                         "intrisic_mat": datasets.Array2D(shape=(3, 3), dtype="float64"),
+                         "extrinsic_mat": {
+                             "rotation": datasets.Array2D(shape=(3, 3), dtype="float64"),
+                             "translation": datasets.Sequence(datasets.Value("float64"), length=3),
+                         },
+                     },
+                     "depth": datasets.Sequence(datasets.Value("string")),
+                     "depth_cal": {
+                         "intrisic_mat": datasets.Array2D(shape=(3, 3), dtype="float64"),
+                         "extrinsic_mat": {
+                             "rotation": datasets.Array2D(shape=(3, 3), dtype="float64"),
+                             "translation": datasets.Sequence(datasets.Value("float64"), length=3),
+                         },
+                     },
+                     "head_pose_gt": datasets.Sequence(
+                         {
+                             "center": datasets.Sequence(datasets.Value("float64"), length=3),
+                             "rotation": datasets.Array2D(shape=(3, 3), dtype="float64"),
+                         }
+                     ),
+                     "head_template": datasets.Value("string"),
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "dataset_path": os.path.join(data_dir["kinect_head_pose_db"], "hpdb"),
+                 },
+             ),
+         ]
+
+     @staticmethod
+     def _get_calibration_information(cal_file_path):
+         with open(cal_file_path, "r", encoding="utf-8") as f:
+             cal_info = f.read().splitlines()
+
+         intrisic_mat = []
+         extrinsic_mat = []
+
+         for data in cal_info[:3]:
+             row = list(map(float, data.strip().split(" ")))
+             intrisic_mat.append(row)
+
+         for data in cal_info[6:9]:
+             row = list(map(float, data.strip().split(" ")))
+             extrinsic_mat.append(row)
+
+         translation = list(map(float, cal_info[10].strip().split(" ")))
+
+         return {
+             "intrisic_mat": intrisic_mat,
+             "extrinsic_mat": {
+                 "rotation": extrinsic_mat,
+                 "translation": translation,
+             },
+         }
+
+     @staticmethod
+     def _parse_head_pose_info(head_pose_file):
+         with open(head_pose_file, "r", encoding="utf-8") as f:
+             head_pose_info = f.read().splitlines()
+
+         rotation = []
+         for data in head_pose_info[:3]:
+             row = list(map(float, data.strip().split(" ")))
+             rotation.append(row)
+
+         center = list(map(float, head_pose_info[4].strip().split(" ")))
+
+         return {
+             "center": center,
+             "rotation": rotation,
+         }
+
+     @staticmethod
+     def _get_head_pose_information(path):
+         head_pose_files = sorted(glob.glob(os.path.join(path, "*_pose.txt")))
+
+         head_poses_info = []
+
+         for head_pose_file in head_pose_files:
+             head_pose = BiwiKinectHeadPose._parse_head_pose_info(head_pose_file)
+             head_poses_info.append(head_pose)
+
+         return head_poses_info
+
+     def _generate_examples(self, dataset_path):
+         idx = 0
+         folders = os.listdir(dataset_path)
+         for item in folders:
+             sequence_number = item
+             sequence_base_path = os.path.join(dataset_path, sequence_number)
+             if os.path.isdir(sequence_base_path):
+                 rgb_files = sorted(glob.glob(os.path.join(sequence_base_path, "*.png")))
+                 depth_files = sorted(glob.glob(os.path.join(sequence_base_path, "*.bin")))
+                 head_template_path = os.path.join(dataset_path, sequence_number + ".obj")
+                 rgb_cal = self._get_calibration_information(os.path.join(sequence_base_path, "rgb.cal"))
+                 depth_cal = self._get_calibration_information(os.path.join(sequence_base_path, "depth.cal"))
+                 head_pose_gt = self._get_head_pose_information(sequence_base_path)
+
+                 yield idx, {
+                     "sequence_number": sequence_number,
+                     "subject_id": _sequence_to_subject_map[sequence_number],
+                     "rgb": rgb_files,
+                     "rgb_cal": rgb_cal,
+                     "depth": depth_files,
+                     "depth_cal": depth_cal,
+                     "head_pose_gt": head_pose_gt,
+                     "head_template": head_template_path,
+                 }
+
+                 idx += 1
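The `.cal` parsing in `_get_calibration_information` can be exercised standalone. Below is a minimal sketch against a synthetic file body; `parse_cal` is a hypothetical helper, and the layout (lines 1-3 intrinsic matrix, lines 7-9 rotation, line 11 translation) is inferred purely from the slicing in the method above, not from official documentation of the `.cal` format.

```python
def parse_cal(text):
    """Parse a Biwi .cal file body into intrinsic/extrinsic matrices.

    Layout inferred from the loading script: lines 0-2 hold the 3x3
    intrinsic matrix, lines 6-8 the 3x3 rotation, line 10 the translation.
    """
    lines = text.splitlines()

    def to_row(line):
        return list(map(float, line.strip().split(" ")))

    return {
        "intrisic_mat": [to_row(line) for line in lines[:3]],
        "extrinsic_mat": {
            "rotation": [to_row(line) for line in lines[6:9]],
            "translation": to_row(lines[10]),
        },
    }


# Synthetic file body with filler lines where the parser skips content.
sample = "\n".join([
    "517.679 0.0 320.0", "0.0 517.679 240.5", "0.0 0.0 1.0",
    "", "0 0 0 0", "",
    "1 0 0", "0 1 0", "0 0 1",
    "",
    "-24.0198 5.8896 -13.2308",
])
cal = parse_cal(sample)
```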