simplexsigil2 committed (verified)
Commit 7846a46 · 1 Parent(s): 2e5f8d7

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50):
  1. .gitattributes +2 -0
  2. .gitignore +0 -2
  3. README.md +1990 -50
  4. generate_parquet.py +428 -0
  5. omnifall_builder.py +1192 -0
  6. parquet/OOPS/test-00000-of-00001.parquet +3 -0
  7. parquet/OOPS/train-00000-of-00001.parquet +3 -0
  8. parquet/OOPS/validation-00000-of-00001.parquet +3 -0
  9. parquet/caucafall/test-00000-of-00001.parquet +3 -0
  10. parquet/caucafall/train-00000-of-00001.parquet +3 -0
  11. parquet/caucafall/validation-00000-of-00001.parquet +3 -0
  12. parquet/cmdfall/test-00000-of-00001.parquet +3 -0
  13. parquet/cmdfall/train-00000-of-00001.parquet +3 -0
  14. parquet/cmdfall/validation-00000-of-00001.parquet +3 -0
  15. parquet/cs-staged-wild/test-00000-of-00001.parquet +3 -0
  16. parquet/cs-staged-wild/train-00000-of-00001.parquet +3 -0
  17. parquet/cs-staged-wild/validation-00000-of-00001.parquet +3 -0
  18. parquet/cs-staged/test-00000-of-00001.parquet +3 -0
  19. parquet/cs-staged/train-00000-of-00001.parquet +3 -0
  20. parquet/cs-staged/validation-00000-of-00001.parquet +3 -0
  21. parquet/cs/test-00000-of-00001.parquet +3 -0
  22. parquet/cs/train-00000-of-00001.parquet +3 -0
  23. parquet/cs/validation-00000-of-00001.parquet +3 -0
  24. parquet/cv-staged-wild/test-00000-of-00001.parquet +3 -0
  25. parquet/cv-staged-wild/train-00000-of-00001.parquet +3 -0
  26. parquet/cv-staged-wild/validation-00000-of-00001.parquet +3 -0
  27. parquet/cv-staged/test-00000-of-00001.parquet +3 -0
  28. parquet/cv-staged/train-00000-of-00001.parquet +3 -0
  29. parquet/cv-staged/validation-00000-of-00001.parquet +3 -0
  30. parquet/cv/test-00000-of-00001.parquet +3 -0
  31. parquet/cv/train-00000-of-00001.parquet +3 -0
  32. parquet/cv/validation-00000-of-00001.parquet +3 -0
  33. parquet/edf/test-00000-of-00001.parquet +3 -0
  34. parquet/edf/train-00000-of-00001.parquet +3 -0
  35. parquet/edf/validation-00000-of-00001.parquet +3 -0
  36. parquet/gmdcsa24/test-00000-of-00001.parquet +3 -0
  37. parquet/gmdcsa24/train-00000-of-00001.parquet +3 -0
  38. parquet/gmdcsa24/validation-00000-of-00001.parquet +3 -0
  39. parquet/labels-syn/train-00000-of-00001.parquet +3 -0
  40. parquet/labels/train-00000-of-00001.parquet +3 -0
  41. parquet/le2i/test-00000-of-00001.parquet +3 -0
  42. parquet/le2i/train-00000-of-00001.parquet +3 -0
  43. parquet/le2i/validation-00000-of-00001.parquet +3 -0
  44. parquet/mcfd/train-00000-of-00001.parquet +3 -0
  45. parquet/metadata-syn/train-00000-of-00001.parquet +3 -0
  46. parquet/occu/test-00000-of-00001.parquet +3 -0
  47. parquet/occu/train-00000-of-00001.parquet +3 -0
  48. parquet/occu/validation-00000-of-00001.parquet +3 -0
  49. parquet/of-itw/test-00000-of-00001.parquet +3 -0
  50. parquet/of-itw/train-00000-of-00001.parquet +3 -0
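The per-config parquet shards above all follow the same `parquet/<config>/<split>-00000-of-00001.parquet` pattern. A minimal sketch (config names taken from the file list above, single shard per split as shown there) that reconstructs those relative paths:

```python
# Rebuild the relative parquet paths used by this commit's layout.
# Config and split names are copied from the files-changed list above.
configs = ["caucafall", "cmdfall", "cs", "cv", "edf", "gmdcsa24", "le2i"]
splits = ["train", "validation", "test"]

paths = [
    f"parquet/{config}/{split}-00000-of-00001.parquet"
    for config in configs
    for split in splits
]

print(paths[0])   # parquet/caucafall/train-00000-of-00001.parquet
print(len(paths))  # 21
```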
.gitattributes CHANGED
@@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+*.ipynb filter=lfs diff=lfs merge=lfs -text
+omnifall_dataset_examples.ipynb filter=lfs diff=lfs merge=lfs -text
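The two added attribute lines route notebook files through Git LFS alongside the existing video patterns. A small sketch (patterns copied from the diff; note real `.gitattributes` matching has extra path-component rules that `fnmatch` does not model) of which filenames the glob patterns capture:

```python
from fnmatch import fnmatch

# LFS-tracked glob patterns, as declared in .gitattributes after this commit.
lfs_patterns = ["*.mp4", "*.webm", "*.ipynb"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked glob pattern."""
    return any(fnmatch(filename, pattern) for pattern in lfs_patterns)

print(is_lfs_tracked("omnifall_dataset_examples.ipynb"))  # True
print(is_lfs_tracked("README.md"))                        # False
```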
.gitignore CHANGED
@@ -1,5 +1,3 @@
 convert_oops_via_to_csv.py
-# Symlink for local testing (HF derives dataset name from directory name)
 .claude
-hf.py
 __pycache__
README.md CHANGED
@@ -12,6 +12,1985 @@ tags:
 pretty_name: 'OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection'
 size_categories:
 - 10K<n<100K
 ---
 [![License: CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
 </br>
@@ -64,7 +2043,8 @@ Also have a look for additional information on our project page:
 
 The repository is organized as follows:
 
-- `omnifall.py` - Custom HuggingFace dataset builder (handles all configs)
 - `labels/` - CSV files containing temporal segment annotations
   - Staged/OOPS labels: 7 columns (`path, label, start, end, subject, cam, dataset`)
   - OF-Syn labels: 19 columns (7 core + 12 demographic/scene metadata)
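The 7-column label CSVs described above can be parsed with the standard library; a minimal sketch using a hypothetical example row (the path and values here are illustrative, not taken from the dataset):

```python
import csv
import io

# Hypothetical row following the 7-column schema listed above:
# path, label, start, end, subject, cam, dataset
csv_text = (
    "path,label,start,end,subject,cam,dataset\n"
    "some/clip,1,12.4,13.1,3,0,le2i\n"
)
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Fields arrive as strings; numeric columns need explicit conversion.
segment = rows[0]
duration = float(segment["end"]) - float(segment["start"])
print(segment["dataset"], round(duration, 2))  # le2i 0.7
```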
@@ -129,13 +2109,13 @@ path/to/clip
 
 ## Evaluation Protocols
 
-All configurations are defined in the `omnifall.py` dataset builder and loaded via `load_dataset("simplexsigil2/omnifall", "<config_name>")`.
 
 ### Labels (no train/val/test splits)
 - `labels` (default): All staged + OOPS labels (52k segments, 7 columns)
 - `labels-syn`: OF-Syn labels with demographic metadata (19k segments, 19 columns)
 - `metadata-syn`: OF-Syn video-level metadata (12k videos)
-- `framewise-syn`: OF-Syn frame-wise HDF5 labels (81 labels per video)
 
 ### OF-Staged Configs
 - `of-sta-cs`: 8 staged datasets, cross-subject splits
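The configs above expose `label` as a 16-class `class_label` feature; the id-to-name table appears in the `dataset_info` YAML this commit adds. A minimal sketch of decoding integer label ids with that mapping (name list copied from the YAML below):

```python
# Label id -> name mapping, as declared in the class_label block of the
# dataset_info metadata added by this commit.
LABEL_NAMES = [
    "walk", "fall", "fallen", "sit_down", "sitting", "lie_down", "lying",
    "stand_up", "standing", "other", "kneel_down", "kneeling",
    "squat_down", "squatting", "crawl", "jump",
]

def int2str(label_id: int) -> str:
    """Decode an integer class id into its label name."""
    return LABEL_NAMES[label_id]

print(int2str(1))   # fall
print(int2str(2))   # fallen
print(len(LABEL_NAMES))  # 16
```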
@@ -144,7 +2124,7 @@ All configurations are defined in the `omnifall.py` dataset builder and loaded v
 ### OF-ItW Config
 - `of-itw`: OOPS-Fall in-the-wild genuine accidents
 
-OF-ItW supports optional video loading via `include_video=True` with `oops_video_dir` (see examples below). Videos are not hosted here due to licensing; run `prepare_oops_videos.py` to download them from the [original OOPS source](https://oops.cs.columbia.edu/data/).
 
 ### OF-Syn Configs
 - `of-syn`: Fixed randomized 80/10/10 split
@@ -152,7 +2132,7 @@ OF-ItW supports optional video loading via `include_video=True` with `oops_video
 - `of-syn-cross-ethnicity`: Cross-ethnicity split
 - `of-syn-cross-bmi`: Cross-BMI split (train: normal/underweight, test: obese)
 
-All OF-Syn configs support optional video loading via `include_video=True` (see examples below).
 
 ### Cross-Domain Evaluation
 - `of-sta-itw-cs`: Train/val on staged CS, test on OOPS
@@ -180,7 +2160,7 @@ The following old config names still work but emit a deprecation warning:
 
 ## Examples
 
-For a complete interactive walkthrough of all configs, video loading, and label visualization, see the [example notebook](test_omnifall_dataset.ipynb).
 
 ```python
 from datasets import load_dataset
@@ -210,57 +2190,17 @@ labels = load_dataset("simplexsigil2/omnifall", "labels")["train"]
 syn_labels = load_dataset("simplexsigil2/omnifall", "labels-syn")["train"]
 ```
 
-### Loading OF-Syn videos
-
-OF-Syn configs support `include_video=True` to download and include the video files (~9 GB download and disk space).
-By default, videos are returned as decoded `Video()` objects. Set `decode_video=False` to get file paths instead.
-
-```python
-from datasets import load_dataset
-
-# Load with decoded video (HF Video() feature)
-ds = load_dataset("simplexsigil2/omnifall", "of-syn",
-                  include_video=True, trust_remote_code=True)
-sample = ds["train"][0]
-print(sample["video"])  # VideoReader object
-
-# Load with file paths only (faster, for custom decoding)
-ds = load_dataset("simplexsigil2/omnifall", "of-syn",
-                  include_video=True, decode_video=False, trust_remote_code=True)
-sample = ds["train"][0]
-print(sample["video"])  # "/path/to/cached/fall/fall_ch_001.mp4"
-
-# Cross-domain with video: train/val (syn) and test (itw) both have videos
-ds = load_dataset("simplexsigil2/omnifall", "of-syn-itw",
-                  include_video=True, decode_video=False,
-                  oops_video_dir="/path/to/oops_prepared",
-                  trust_remote_code=True)
-print(ds["train"][0]["video"])  # syn video path (auto-downloaded)
-print(ds["test"][0]["video"])   # itw video path (from oops_video_dir)
-```
-
-### Loading OF-ItW (OOPS) videos
-
-OOPS videos are not hosted in this repository due to licensing. To load OF-ItW with videos, first prepare the OOPS videos using the included script:
 
 ```bash
-# Step 1: Prepare OOPS videos (~45GB streamed from source, ~2.6GB disk space)
 python prepare_oops_videos.py --output_dir /path/to/oops_prepared
 ```
 
-```python
-# Step 2: Load OF-ItW with videos
-from datasets import load_dataset
-
-ds = load_dataset("simplexsigil2/omnifall", "of-itw",
-                  include_video=True, decode_video=False,
-                  oops_video_dir="/path/to/oops_prepared",
-                  trust_remote_code=True)
-sample = ds["train"][0]
-print(sample["video"])  # "/path/to/oops_prepared/falls/BestFailsofWeek2July2016_FailArmy9.mp4"
-```
-
-The preparation script streams the full [OOPS dataset](https://oops.cs.columbia.edu/data/) archive (~45GB download) from the original source and extracts only the 818 videos used in OF-ItW. The archive is streamed and never written to disk, so only ~2.6GB of disk space is needed for the extracted videos. If you already have the OOPS archive downloaded locally, pass it with `--oops_archive /path/to/video_and_anns.tar.gz`.
 
 ## Label definitions
 pretty_name: 'OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection'
 size_categories:
 - 10K<n<100K
+configs:
+- config_name: labels
+  data_files:
+  - split: train
+    path: parquet/labels/train-*.parquet
+  default: true
+- config_name: labels-syn
+  data_files:
+  - split: train
+    path: parquet/labels-syn/train-*.parquet
+- config_name: metadata-syn
+  data_files:
+  - split: train
+    path: parquet/metadata-syn/train-*.parquet
+- config_name: of-sta-cs
+  data_files:
+  - split: train
+    path: parquet/of-sta-cs/train-*.parquet
+  - split: validation
+    path: parquet/of-sta-cs/validation-*.parquet
+  - split: test
+    path: parquet/of-sta-cs/test-*.parquet
+- config_name: of-sta-cv
+  data_files:
+  - split: train
+    path: parquet/of-sta-cv/train-*.parquet
+  - split: validation
+    path: parquet/of-sta-cv/validation-*.parquet
+  - split: test
+    path: parquet/of-sta-cv/test-*.parquet
+- config_name: of-itw
+  data_files:
+  - split: train
+    path: parquet/of-itw/train-*.parquet
+  - split: validation
+    path: parquet/of-itw/validation-*.parquet
+  - split: test
+    path: parquet/of-itw/test-*.parquet
+- config_name: of-syn
+  data_files:
+  - split: train
+    path: parquet/of-syn/train-*.parquet
+  - split: validation
+    path: parquet/of-syn/validation-*.parquet
+  - split: test
+    path: parquet/of-syn/test-*.parquet
+- config_name: of-syn-cross-age
+  data_files:
+  - split: train
+    path: parquet/of-syn-cross-age/train-*.parquet
+  - split: validation
+    path: parquet/of-syn-cross-age/validation-*.parquet
+  - split: test
+    path: parquet/of-syn-cross-age/test-*.parquet
+- config_name: of-syn-cross-ethnicity
+  data_files:
+  - split: train
+    path: parquet/of-syn-cross-ethnicity/train-*.parquet
+  - split: validation
+    path: parquet/of-syn-cross-ethnicity/validation-*.parquet
+  - split: test
+    path: parquet/of-syn-cross-ethnicity/test-*.parquet
+- config_name: of-syn-cross-bmi
+  data_files:
+  - split: train
+    path: parquet/of-syn-cross-bmi/train-*.parquet
+  - split: validation
+    path: parquet/of-syn-cross-bmi/validation-*.parquet
+  - split: test
+    path: parquet/of-syn-cross-bmi/test-*.parquet
+- config_name: of-sta-itw-cs
+  data_files:
+  - split: train
+    path: parquet/of-sta-itw-cs/train-*.parquet
+  - split: validation
+    path: parquet/of-sta-itw-cs/validation-*.parquet
+  - split: test
+    path: parquet/of-sta-itw-cs/test-*.parquet
+- config_name: of-sta-itw-cv
+  data_files:
+  - split: train
+    path: parquet/of-sta-itw-cv/train-*.parquet
+  - split: validation
+    path: parquet/of-sta-itw-cv/validation-*.parquet
+  - split: test
+    path: parquet/of-sta-itw-cv/test-*.parquet
+- config_name: of-syn-itw
+  data_files:
+  - split: train
+    path: parquet/of-syn-itw/train-*.parquet
+  - split: validation
+    path: parquet/of-syn-itw/validation-*.parquet
+  - split: test
+    path: parquet/of-syn-itw/test-*.parquet
+- config_name: cs
+  data_files:
+  - split: train
+    path: parquet/cs/train-*.parquet
+  - split: validation
+    path: parquet/cs/validation-*.parquet
+  - split: test
+    path: parquet/cs/test-*.parquet
+- config_name: cv
+  data_files:
+  - split: train
+    path: parquet/cv/train-*.parquet
+  - split: validation
+    path: parquet/cv/validation-*.parquet
+  - split: test
+    path: parquet/cv/test-*.parquet
+- config_name: caucafall
+  data_files:
+  - split: train
+    path: parquet/caucafall/train-*.parquet
+  - split: validation
+    path: parquet/caucafall/validation-*.parquet
+  - split: test
+    path: parquet/caucafall/test-*.parquet
+- config_name: cmdfall
+  data_files:
+  - split: train
+    path: parquet/cmdfall/train-*.parquet
+  - split: validation
+    path: parquet/cmdfall/validation-*.parquet
+  - split: test
+    path: parquet/cmdfall/test-*.parquet
+- config_name: edf
+  data_files:
+  - split: train
+    path: parquet/edf/train-*.parquet
+  - split: validation
+    path: parquet/edf/validation-*.parquet
+  - split: test
+    path: parquet/edf/test-*.parquet
+- config_name: gmdcsa24
+  data_files:
+  - split: train
+    path: parquet/gmdcsa24/train-*.parquet
+  - split: validation
+    path: parquet/gmdcsa24/validation-*.parquet
+  - split: test
+    path: parquet/gmdcsa24/test-*.parquet
+- config_name: le2i
+  data_files:
+  - split: train
+    path: parquet/le2i/train-*.parquet
+  - split: validation
+    path: parquet/le2i/validation-*.parquet
+  - split: test
+    path: parquet/le2i/test-*.parquet
+- config_name: mcfd
+  data_files:
+  - split: train
+    path: parquet/mcfd/train-*.parquet
+- config_name: occu
+  data_files:
+  - split: train
+    path: parquet/occu/train-*.parquet
+  - split: validation
+    path: parquet/occu/validation-*.parquet
+  - split: test
+    path: parquet/occu/test-*.parquet
+- config_name: up_fall
+  data_files:
+  - split: train
+    path: parquet/up_fall/train-*.parquet
+  - split: validation
+    path: parquet/up_fall/validation-*.parquet
+  - split: test
+    path: parquet/up_fall/test-*.parquet
+- config_name: cs-staged
+  data_files:
+  - split: train
+    path: parquet/cs-staged/train-*.parquet
+  - split: validation
+    path: parquet/cs-staged/validation-*.parquet
+  - split: test
+    path: parquet/cs-staged/test-*.parquet
+- config_name: cv-staged
+  data_files:
+  - split: train
+    path: parquet/cv-staged/train-*.parquet
+  - split: validation
+    path: parquet/cv-staged/validation-*.parquet
+  - split: test
+    path: parquet/cv-staged/test-*.parquet
+- config_name: cs-staged-wild
+  data_files:
+  - split: train
+    path: parquet/cs-staged-wild/train-*.parquet
+  - split: validation
+    path: parquet/cs-staged-wild/validation-*.parquet
+  - split: test
+    path: parquet/cs-staged-wild/test-*.parquet
+- config_name: cv-staged-wild
+  data_files:
+  - split: train
+    path: parquet/cv-staged-wild/train-*.parquet
+  - split: validation
+    path: parquet/cv-staged-wild/validation-*.parquet
+  - split: test
+    path: parquet/cv-staged-wild/test-*.parquet
+- config_name: OOPS
+  data_files:
+  - split: train
+    path: parquet/OOPS/train-*.parquet
+  - split: validation
+    path: parquet/OOPS/validation-*.parquet
+  - split: test
+    path: parquet/OOPS/test-*.parquet
+dataset_info:
+- config_name: labels
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 52618
+- config_name: labels-syn
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  - name: age_group
+    dtype:
+      class_label:
+        names:
+          '0': toddlers_1_4
+          '1': children_5_12
+          '2': teenagers_13_17
+          '3': young_adults_18_34
+          '4': middle_aged_35_64
+          '5': elderly_65_plus
+  - name: gender_presentation
+    dtype:
+      class_label:
+        names:
+          '0': male
+          '1': female
+  - name: monk_skin_tone
+    dtype:
+      class_label:
+        names:
+          '0': mst1
+          '1': mst2
+          '2': mst3
+          '3': mst4
+          '4': mst5
+          '5': mst6
+          '6': mst7
+          '7': mst8
+          '8': mst9
+          '9': mst10
+  - name: race_ethnicity_omb
+    dtype:
+      class_label:
+        names:
+          '0': white
+          '1': black
+          '2': asian
+          '3': hispanic_latino
+          '4': aian
+          '5': nhpi
+          '6': mena
+  - name: bmi_band
+    dtype:
+      class_label:
+        names:
+          '0': underweight
+          '1': normal
+          '2': overweight
+          '3': obese
+  - name: height_band
+    dtype:
+      class_label:
+        names:
+          '0': short
+          '1': avg
+          '2': tall
+  - name: environment_category
+    dtype:
+      class_label:
+        names:
+          '0': indoor
+          '1': outdoor
+  - name: camera_shot
+    dtype:
+      class_label:
+        names:
+          '0': static_wide
+          '1': static_medium_wide
+  - name: speed
+    dtype:
+      class_label:
+        names:
+          '0': 24fps_rt
+          '1': 25fps_rt
+          '2': 30fps_rt
+          '3': std_rt
+  - name: camera_elevation
+    dtype:
+      class_label:
+        names:
+          '0': eye
+          '1': low
+          '2': high
+          '3': top
+  - name: camera_azimuth
+    dtype:
+      class_label:
+        names:
+          '0': front
+          '1': rear
+          '2': left
+          '3': right
+  - name: camera_distance
+    dtype:
+      class_label:
+        names:
+          '0': medium
+          '1': far
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 19228
+- config_name: metadata-syn
+  features:
+  - name: path
+    dtype: string
+  - name: dataset
+    dtype: string
+  - name: age_group
+    dtype:
+      class_label:
+        names:
+          '0': toddlers_1_4
+          '1': children_5_12
+          '2': teenagers_13_17
+          '3': young_adults_18_34
+          '4': middle_aged_35_64
+          '5': elderly_65_plus
+  - name: gender_presentation
+    dtype:
+      class_label:
+        names:
+          '0': male
+          '1': female
+  - name: monk_skin_tone
+    dtype:
+      class_label:
+        names:
+          '0': mst1
+          '1': mst2
+          '2': mst3
+          '3': mst4
+          '4': mst5
+          '5': mst6
+          '6': mst7
+          '7': mst8
+          '8': mst9
+          '9': mst10
+  - name: race_ethnicity_omb
+    dtype:
+      class_label:
+        names:
+          '0': white
+          '1': black
+          '2': asian
+          '3': hispanic_latino
+          '4': aian
+          '5': nhpi
+          '6': mena
+  - name: bmi_band
+    dtype:
+      class_label:
+        names:
+          '0': underweight
+          '1': normal
+          '2': overweight
+          '3': obese
+  - name: height_band
+    dtype:
+      class_label:
+        names:
+          '0': short
+          '1': avg
+          '2': tall
+  - name: environment_category
+    dtype:
+      class_label:
+        names:
+          '0': indoor
+          '1': outdoor
+  - name: camera_shot
+    dtype:
+      class_label:
+        names:
+          '0': static_wide
+          '1': static_medium_wide
+  - name: speed
+    dtype:
+      class_label:
+        names:
+          '0': 24fps_rt
+          '1': 25fps_rt
+          '2': 30fps_rt
+          '3': std_rt
+  - name: camera_elevation
+    dtype:
+      class_label:
+        names:
+          '0': eye
+          '1': low
+          '2': high
+          '3': top
+  - name: camera_azimuth
+    dtype:
+      class_label:
+        names:
+          '0': front
+          '1': rear
+          '2': left
+          '3': right
+  - name: camera_distance
+    dtype:
+      class_label:
+        names:
+          '0': medium
+          '1': far
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 12000
+- config_name: of-sta-cs
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 26036
+  - name: validation
+    num_bytes: 0
+    num_examples: 4272
+  - name: test
+    num_bytes: 0
+    num_examples: 18664
+- config_name: of-sta-cv
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 8888
+  - name: validation
+    num_bytes: 0
+    num_examples: 8185
+  - name: test
+    num_bytes: 0
+    num_examples: 27199
+- config_name: of-itw
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 1023
+  - name: validation
+    num_bytes: 0
+    num_examples: 482
+  - name: test
+    num_bytes: 0
+    num_examples: 3673
+- config_name: of-syn
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  - name: age_group
+    dtype:
+      class_label:
+        names:
+          '0': toddlers_1_4
+          '1': children_5_12
+          '2': teenagers_13_17
+          '3': young_adults_18_34
+          '4': middle_aged_35_64
+          '5': elderly_65_plus
+  - name: gender_presentation
+    dtype:
+      class_label:
+        names:
+          '0': male
+          '1': female
+  - name: monk_skin_tone
+    dtype:
+      class_label:
+        names:
+          '0': mst1
+          '1': mst2
+          '2': mst3
+          '3': mst4
+          '4': mst5
+          '5': mst6
+          '6': mst7
+          '7': mst8
+          '8': mst9
+          '9': mst10
+  - name: race_ethnicity_omb
+    dtype:
+      class_label:
+        names:
+          '0': white
+          '1': black
+          '2': asian
+          '3': hispanic_latino
+          '4': aian
+          '5': nhpi
+          '6': mena
+  - name: bmi_band
+    dtype:
+      class_label:
+        names:
+          '0': underweight
+          '1': normal
+          '2': overweight
+          '3': obese
+  - name: height_band
+    dtype:
+      class_label:
+        names:
+          '0': short
+          '1': avg
+          '2': tall
+  - name: environment_category
+    dtype:
+      class_label:
+        names:
+          '0': indoor
+          '1': outdoor
+  - name: camera_shot
+    dtype:
+      class_label:
+        names:
+          '0': static_wide
+          '1': static_medium_wide
+  - name: speed
+    dtype:
+      class_label:
+        names:
+          '0': 24fps_rt
+          '1': 25fps_rt
+          '2': 30fps_rt
+          '3': std_rt
+  - name: camera_elevation
+    dtype:
+      class_label:
+        names:
+          '0': eye
+          '1': low
+          '2': high
+          '3': top
+  - name: camera_azimuth
+    dtype:
+      class_label:
+        names:
+          '0': front
+          '1': rear
+          '2': left
+          '3': right
+  - name: camera_distance
+    dtype:
+      class_label:
+        names:
+          '0': medium
+          '1': far
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 15344
+  - name: validation
+    num_bytes: 0
+    num_examples: 1956
+  - name: test
+    num_bytes: 0
+    num_examples: 1928
+- config_name: of-syn-cross-age
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
+  - name: age_group
+    dtype:
+      class_label:
+        names:
+          '0': toddlers_1_4
+          '1': children_5_12
+          '2': teenagers_13_17
+          '3': young_adults_18_34
+          '4': middle_aged_35_64
+          '5': elderly_65_plus
+  - name: gender_presentation
+    dtype:
+      class_label:
+        names:
+          '0': male
+          '1': female
+  - name: monk_skin_tone
+    dtype:
+      class_label:
+        names:
+          '0': mst1
+          '1': mst2
+          '2': mst3
+          '3': mst4
+          '4': mst5
+          '5': mst6
+          '6': mst7
+          '7': mst8
+          '8': mst9
+          '9': mst10
+  - name: race_ethnicity_omb
+    dtype:
+      class_label:
+        names:
+          '0': white
+          '1': black
+          '2': asian
+          '3': hispanic_latino
+          '4': aian
+          '5': nhpi
+          '6': mena
+  - name: bmi_band
+    dtype:
+      class_label:
+        names:
+          '0': underweight
+          '1': normal
+          '2': overweight
+          '3': obese
+  - name: height_band
+    dtype:
+      class_label:
+        names:
+          '0': short
+          '1': avg
+          '2': tall
+  - name: environment_category
+    dtype:
+      class_label:
+        names:
+          '0': indoor
+          '1': outdoor
+  - name: camera_shot
+    dtype:
+      class_label:
+        names:
+          '0': static_wide
+          '1': static_medium_wide
+  - name: speed
+    dtype:
+      class_label:
+        names:
+          '0': 24fps_rt
+          '1': 25fps_rt
+          '2': 30fps_rt
+          '3': std_rt
+  - name: camera_elevation
+    dtype:
+      class_label:
+        names:
+          '0': eye
+          '1': low
+          '2': high
+          '3': top
+  - name: camera_azimuth
+    dtype:
+      class_label:
+        names:
+          '0': front
+          '1': rear
+          '2': left
+          '3': right
+  - name: camera_distance
+    dtype:
+      class_label:
+        names:
+          '0': medium
+          '1': far
+  splits:
+  - name: train
+    num_bytes: 0
+    num_examples: 6364
+  - name: validation
+    num_bytes: 0
+    num_examples: 3183
+  - name: test
+    num_bytes: 0
+    num_examples: 9681
+- config_name: of-syn-cross-ethnicity
+  features:
+  - name: path
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': walk
+          '1': fall
+          '2': fallen
+          '3': sit_down
+          '4': sitting
+          '5': lie_down
+          '6': lying
+          '7': stand_up
+          '8': standing
+          '9': other
+          '10': kneel_down
+          '11': kneeling
+          '12': squat_down
+          '13': squatting
+          '14': crawl
+          '15': jump
+  - name: start
+    dtype: float32
+  - name: end
+    dtype: float32
+  - name: subject
+    dtype: int32
+  - name: cam
+    dtype: int32
+  - name: dataset
+    dtype: string
957
+ dtype: string
958
+ - name: age_group
959
+ dtype:
960
+ class_label:
961
+ names:
962
+ '0': toddlers_1_4
963
+ '1': children_5_12
964
+ '2': teenagers_13_17
965
+ '3': young_adults_18_34
966
+ '4': middle_aged_35_64
967
+ '5': elderly_65_plus
968
+ - name: gender_presentation
969
+ dtype:
970
+ class_label:
971
+ names:
972
+ '0': male
973
+ '1': female
974
+ - name: monk_skin_tone
975
+ dtype:
976
+ class_label:
977
+ names:
978
+ '0': mst1
979
+ '1': mst2
980
+ '2': mst3
981
+ '3': mst4
982
+ '4': mst5
983
+ '5': mst6
984
+ '6': mst7
985
+ '7': mst8
986
+ '8': mst9
987
+ '9': mst10
988
+ - name: race_ethnicity_omb
989
+ dtype:
990
+ class_label:
991
+ names:
992
+ '0': white
993
+ '1': black
994
+ '2': asian
995
+ '3': hispanic_latino
996
+ '4': aian
997
+ '5': nhpi
998
+ '6': mena
999
+ - name: bmi_band
1000
+ dtype:
1001
+ class_label:
1002
+ names:
1003
+ '0': underweight
1004
+ '1': normal
1005
+ '2': overweight
1006
+ '3': obese
1007
+ - name: height_band
1008
+ dtype:
1009
+ class_label:
1010
+ names:
1011
+ '0': short
1012
+ '1': avg
1013
+ '2': tall
1014
+ - name: environment_category
1015
+ dtype:
1016
+ class_label:
1017
+ names:
1018
+ '0': indoor
1019
+ '1': outdoor
1020
+ - name: camera_shot
1021
+ dtype:
1022
+ class_label:
1023
+ names:
1024
+ '0': static_wide
1025
+ '1': static_medium_wide
1026
+ - name: speed
1027
+ dtype:
1028
+ class_label:
1029
+ names:
1030
+ '0': 24fps_rt
1031
+ '1': 25fps_rt
1032
+ '2': 30fps_rt
1033
+ '3': std_rt
1034
+ - name: camera_elevation
1035
+ dtype:
1036
+ class_label:
1037
+ names:
1038
+ '0': eye
1039
+ '1': low
1040
+ '2': high
1041
+ '3': top
1042
+ - name: camera_azimuth
1043
+ dtype:
1044
+ class_label:
1045
+ names:
1046
+ '0': front
1047
+ '1': rear
1048
+ '2': left
1049
+ '3': right
1050
+ - name: camera_distance
1051
+ dtype:
1052
+ class_label:
1053
+ names:
1054
+ '0': medium
1055
+ '1': far
1056
+ splits:
1057
+ - name: train
1058
+ num_bytes: 0
1059
+ num_examples: 8267
1060
+ - name: validation
1061
+ num_bytes: 0
1062
+ num_examples: 2762
1063
+ - name: test
1064
+ num_bytes: 0
1065
+ num_examples: 8199
1066
+ - config_name: of-syn-cross-bmi
1067
+ features:
1068
+ - name: path
1069
+ dtype: string
1070
+ - name: label
1071
+ dtype:
1072
+ class_label:
1073
+ names:
1074
+ '0': walk
1075
+ '1': fall
1076
+ '2': fallen
1077
+ '3': sit_down
1078
+ '4': sitting
1079
+ '5': lie_down
1080
+ '6': lying
1081
+ '7': stand_up
1082
+ '8': standing
1083
+ '9': other
1084
+ '10': kneel_down
1085
+ '11': kneeling
1086
+ '12': squat_down
1087
+ '13': squatting
1088
+ '14': crawl
1089
+ '15': jump
1090
+ - name: start
1091
+ dtype: float32
1092
+ - name: end
1093
+ dtype: float32
1094
+ - name: subject
1095
+ dtype: int32
1096
+ - name: cam
1097
+ dtype: int32
1098
+ - name: dataset
1099
+ dtype: string
1100
+ - name: age_group
1101
+ dtype:
1102
+ class_label:
1103
+ names:
1104
+ '0': toddlers_1_4
1105
+ '1': children_5_12
1106
+ '2': teenagers_13_17
1107
+ '3': young_adults_18_34
1108
+ '4': middle_aged_35_64
1109
+ '5': elderly_65_plus
1110
+ - name: gender_presentation
1111
+ dtype:
1112
+ class_label:
1113
+ names:
1114
+ '0': male
1115
+ '1': female
1116
+ - name: monk_skin_tone
1117
+ dtype:
1118
+ class_label:
1119
+ names:
1120
+ '0': mst1
1121
+ '1': mst2
1122
+ '2': mst3
1123
+ '3': mst4
1124
+ '4': mst5
1125
+ '5': mst6
1126
+ '6': mst7
1127
+ '7': mst8
1128
+ '8': mst9
1129
+ '9': mst10
1130
+ - name: race_ethnicity_omb
1131
+ dtype:
1132
+ class_label:
1133
+ names:
1134
+ '0': white
1135
+ '1': black
1136
+ '2': asian
1137
+ '3': hispanic_latino
1138
+ '4': aian
1139
+ '5': nhpi
1140
+ '6': mena
1141
+ - name: bmi_band
1142
+ dtype:
1143
+ class_label:
1144
+ names:
1145
+ '0': underweight
1146
+ '1': normal
1147
+ '2': overweight
1148
+ '3': obese
1149
+ - name: height_band
1150
+ dtype:
1151
+ class_label:
1152
+ names:
1153
+ '0': short
1154
+ '1': avg
1155
+ '2': tall
1156
+ - name: environment_category
1157
+ dtype:
1158
+ class_label:
1159
+ names:
1160
+ '0': indoor
1161
+ '1': outdoor
1162
+ - name: camera_shot
1163
+ dtype:
1164
+ class_label:
1165
+ names:
1166
+ '0': static_wide
1167
+ '1': static_medium_wide
1168
+ - name: speed
1169
+ dtype:
1170
+ class_label:
1171
+ names:
1172
+ '0': 24fps_rt
1173
+ '1': 25fps_rt
1174
+ '2': 30fps_rt
1175
+ '3': std_rt
1176
+ - name: camera_elevation
1177
+ dtype:
1178
+ class_label:
1179
+ names:
1180
+ '0': eye
1181
+ '1': low
1182
+ '2': high
1183
+ '3': top
1184
+ - name: camera_azimuth
1185
+ dtype:
1186
+ class_label:
1187
+ names:
1188
+ '0': front
1189
+ '1': rear
1190
+ '2': left
1191
+ '3': right
1192
+ - name: camera_distance
1193
+ dtype:
1194
+ class_label:
1195
+ names:
1196
+ '0': medium
1197
+ '1': far
1198
+ splits:
1199
+ - name: train
1200
+ num_bytes: 0
1201
+ num_examples: 9675
1202
+ - name: validation
1203
+ num_bytes: 0
1204
+ num_examples: 4701
1205
+ - name: test
1206
+ num_bytes: 0
1207
+ num_examples: 4852
1208
+ - config_name: of-sta-itw-cs
1209
+ features:
1210
+ - name: path
1211
+ dtype: string
1212
+ - name: label
1213
+ dtype:
1214
+ class_label:
1215
+ names:
1216
+ '0': walk
1217
+ '1': fall
1218
+ '2': fallen
1219
+ '3': sit_down
1220
+ '4': sitting
1221
+ '5': lie_down
1222
+ '6': lying
1223
+ '7': stand_up
1224
+ '8': standing
1225
+ '9': other
1226
+ '10': kneel_down
1227
+ '11': kneeling
1228
+ '12': squat_down
1229
+ '13': squatting
1230
+ '14': crawl
1231
+ '15': jump
1232
+ - name: start
1233
+ dtype: float32
1234
+ - name: end
1235
+ dtype: float32
1236
+ - name: subject
1237
+ dtype: int32
1238
+ - name: cam
1239
+ dtype: int32
1240
+ - name: dataset
1241
+ dtype: string
1242
+ splits:
1243
+ - name: train
1244
+ num_bytes: 0
1245
+ num_examples: 26036
1246
+ - name: validation
1247
+ num_bytes: 0
1248
+ num_examples: 4272
1249
+ - name: test
1250
+ num_bytes: 0
1251
+ num_examples: 3673
1252
+ - config_name: of-sta-itw-cv
1253
+ features:
1254
+ - name: path
1255
+ dtype: string
1256
+ - name: label
1257
+ dtype:
1258
+ class_label:
1259
+ names:
1260
+ '0': walk
1261
+ '1': fall
1262
+ '2': fallen
1263
+ '3': sit_down
1264
+ '4': sitting
1265
+ '5': lie_down
1266
+ '6': lying
1267
+ '7': stand_up
1268
+ '8': standing
1269
+ '9': other
1270
+ '10': kneel_down
1271
+ '11': kneeling
1272
+ '12': squat_down
1273
+ '13': squatting
1274
+ '14': crawl
1275
+ '15': jump
1276
+ - name: start
1277
+ dtype: float32
1278
+ - name: end
1279
+ dtype: float32
1280
+ - name: subject
1281
+ dtype: int32
1282
+ - name: cam
1283
+ dtype: int32
1284
+ - name: dataset
1285
+ dtype: string
1286
+ splits:
1287
+ - name: train
1288
+ num_bytes: 0
1289
+ num_examples: 8888
1290
+ - name: validation
1291
+ num_bytes: 0
1292
+ num_examples: 8185
1293
+ - name: test
1294
+ num_bytes: 0
1295
+ num_examples: 3673
1296
+ - config_name: of-syn-itw
1297
+ features:
1298
+ - name: path
1299
+ dtype: string
1300
+ - name: label
1301
+ dtype:
1302
+ class_label:
1303
+ names:
1304
+ '0': walk
1305
+ '1': fall
1306
+ '2': fallen
1307
+ '3': sit_down
1308
+ '4': sitting
1309
+ '5': lie_down
1310
+ '6': lying
1311
+ '7': stand_up
1312
+ '8': standing
1313
+ '9': other
1314
+ '10': kneel_down
1315
+ '11': kneeling
1316
+ '12': squat_down
1317
+ '13': squatting
1318
+ '14': crawl
1319
+ '15': jump
1320
+ - name: start
1321
+ dtype: float32
1322
+ - name: end
1323
+ dtype: float32
1324
+ - name: subject
1325
+ dtype: int32
1326
+ - name: cam
1327
+ dtype: int32
1328
+ - name: dataset
1329
+ dtype: string
1330
+ splits:
1331
+ - name: train
1332
+ num_bytes: 0
1333
+ num_examples: 15344
1334
+ - name: validation
1335
+ num_bytes: 0
1336
+ num_examples: 1956
1337
+ - name: test
1338
+ num_bytes: 0
1339
+ num_examples: 3673
1340
+ - config_name: cs
1341
+ features:
1342
+ - name: path
1343
+ dtype: string
1344
+ - name: label
1345
+ dtype:
1346
+ class_label:
1347
+ names:
1348
+ '0': walk
1349
+ '1': fall
1350
+ '2': fallen
1351
+ '3': sit_down
1352
+ '4': sitting
1353
+ '5': lie_down
1354
+ '6': lying
1355
+ '7': stand_up
1356
+ '8': standing
1357
+ '9': other
1358
+ '10': kneel_down
1359
+ '11': kneeling
1360
+ '12': squat_down
1361
+ '13': squatting
1362
+ '14': crawl
1363
+ '15': jump
1364
+ - name: start
1365
+ dtype: float32
1366
+ - name: end
1367
+ dtype: float32
1368
+ - name: subject
1369
+ dtype: int32
1370
+ - name: cam
1371
+ dtype: int32
1372
+ - name: dataset
1373
+ dtype: string
1374
+ splits:
1375
+ - name: train
1376
+ num_bytes: 0
1377
+ num_examples: 27059
1378
+ - name: validation
1379
+ num_bytes: 0
1380
+ num_examples: 4754
1381
+ - name: test
1382
+ num_bytes: 0
1383
+ num_examples: 22337
1384
+ - config_name: cv
1385
+ features:
1386
+ - name: path
1387
+ dtype: string
1388
+ - name: label
1389
+ dtype:
1390
+ class_label:
1391
+ names:
1392
+ '0': walk
1393
+ '1': fall
1394
+ '2': fallen
1395
+ '3': sit_down
1396
+ '4': sitting
1397
+ '5': lie_down
1398
+ '6': lying
1399
+ '7': stand_up
1400
+ '8': standing
1401
+ '9': other
1402
+ '10': kneel_down
1403
+ '11': kneeling
1404
+ '12': squat_down
1405
+ '13': squatting
1406
+ '14': crawl
1407
+ '15': jump
1408
+ - name: start
1409
+ dtype: float32
1410
+ - name: end
1411
+ dtype: float32
1412
+ - name: subject
1413
+ dtype: int32
1414
+ - name: cam
1415
+ dtype: int32
1416
+ - name: dataset
1417
+ dtype: string
1418
+ splits:
1419
+ - name: train
1420
+ num_bytes: 0
1421
+ num_examples: 9911
1422
+ - name: validation
1423
+ num_bytes: 0
1424
+ num_examples: 8667
1425
+ - name: test
1426
+ num_bytes: 0
1427
+ num_examples: 30872
1428
+ - config_name: caucafall
1429
+ features:
1430
+ - name: path
1431
+ dtype: string
1432
+ - name: label
1433
+ dtype:
1434
+ class_label:
1435
+ names:
1436
+ '0': walk
1437
+ '1': fall
1438
+ '2': fallen
1439
+ '3': sit_down
1440
+ '4': sitting
1441
+ '5': lie_down
1442
+ '6': lying
1443
+ '7': stand_up
1444
+ '8': standing
1445
+ '9': other
1446
+ '10': kneel_down
1447
+ '11': kneeling
1448
+ '12': squat_down
1449
+ '13': squatting
1450
+ '14': crawl
1451
+ '15': jump
1452
+ - name: start
1453
+ dtype: float32
1454
+ - name: end
1455
+ dtype: float32
1456
+ - name: subject
1457
+ dtype: int32
1458
+ - name: cam
1459
+ dtype: int32
1460
+ - name: dataset
1461
+ dtype: string
1462
+ splits:
1463
+ - name: train
1464
+ num_bytes: 0
1465
+ num_examples: 176
1466
+ - name: validation
1467
+ num_bytes: 0
1468
+ num_examples: 25
1469
+ - name: test
1470
+ num_bytes: 0
1471
+ num_examples: 47
1472
+ - config_name: cmdfall
1473
+ features:
1474
+ - name: path
1475
+ dtype: string
1476
+ - name: label
1477
+ dtype:
1478
+ class_label:
1479
+ names:
1480
+ '0': walk
1481
+ '1': fall
1482
+ '2': fallen
1483
+ '3': sit_down
1484
+ '4': sitting
1485
+ '5': lie_down
1486
+ '6': lying
1487
+ '7': stand_up
1488
+ '8': standing
1489
+ '9': other
1490
+ '10': kneel_down
1491
+ '11': kneeling
1492
+ '12': squat_down
1493
+ '13': squatting
1494
+ '14': crawl
1495
+ '15': jump
1496
+ - name: start
1497
+ dtype: float32
1498
+ - name: end
1499
+ dtype: float32
1500
+ - name: subject
1501
+ dtype: int32
1502
+ - name: cam
1503
+ dtype: int32
1504
+ - name: dataset
1505
+ dtype: string
1506
+ splits:
1507
+ - name: train
1508
+ num_bytes: 0
1509
+ num_examples: 20884
1510
+ - name: validation
1511
+ num_bytes: 0
1512
+ num_examples: 3689
1513
+ - name: test
1514
+ num_bytes: 0
1515
+ num_examples: 17570
1516
+ - config_name: edf
1517
+ features:
1518
+ - name: path
1519
+ dtype: string
1520
+ - name: label
1521
+ dtype:
1522
+ class_label:
1523
+ names:
1524
+ '0': walk
1525
+ '1': fall
1526
+ '2': fallen
1527
+ '3': sit_down
1528
+ '4': sitting
1529
+ '5': lie_down
1530
+ '6': lying
1531
+ '7': stand_up
1532
+ '8': standing
1533
+ '9': other
1534
+ '10': kneel_down
1535
+ '11': kneeling
1536
+ '12': squat_down
1537
+ '13': squatting
1538
+ '14': crawl
1539
+ '15': jump
1540
+ - name: start
1541
+ dtype: float32
1542
+ - name: end
1543
+ dtype: float32
1544
+ - name: subject
1545
+ dtype: int32
1546
+ - name: cam
1547
+ dtype: int32
1548
+ - name: dataset
1549
+ dtype: string
1550
+ splits:
1551
+ - name: train
1552
+ num_bytes: 0
1553
+ num_examples: 302
1554
+ - name: validation
1555
+ num_bytes: 0
1556
+ num_examples: 78
1557
+ - name: test
1558
+ num_bytes: 0
1559
+ num_examples: 128
1560
+ - config_name: gmdcsa24
1561
+ features:
1562
+ - name: path
1563
+ dtype: string
1564
+ - name: label
1565
+ dtype:
1566
+ class_label:
1567
+ names:
1568
+ '0': walk
1569
+ '1': fall
1570
+ '2': fallen
1571
+ '3': sit_down
1572
+ '4': sitting
1573
+ '5': lie_down
1574
+ '6': lying
1575
+ '7': stand_up
1576
+ '8': standing
1577
+ '9': other
1578
+ '10': kneel_down
1579
+ '11': kneeling
1580
+ '12': squat_down
1581
+ '13': squatting
1582
+ '14': crawl
1583
+ '15': jump
1584
+ - name: start
1585
+ dtype: float32
1586
+ - name: end
1587
+ dtype: float32
1588
+ - name: subject
1589
+ dtype: int32
1590
+ - name: cam
1591
+ dtype: int32
1592
+ - name: dataset
1593
+ dtype: string
1594
+ splits:
1595
+ - name: train
1596
+ num_bytes: 0
1597
+ num_examples: 213
1598
+ - name: validation
1599
+ num_bytes: 0
1600
+ num_examples: 152
1601
+ - name: test
1602
+ num_bytes: 0
1603
+ num_examples: 93
1604
+ - config_name: le2i
1605
+ features:
1606
+ - name: path
1607
+ dtype: string
1608
+ - name: label
1609
+ dtype:
1610
+ class_label:
1611
+ names:
1612
+ '0': walk
1613
+ '1': fall
1614
+ '2': fallen
1615
+ '3': sit_down
1616
+ '4': sitting
1617
+ '5': lie_down
1618
+ '6': lying
1619
+ '7': stand_up
1620
+ '8': standing
1621
+ '9': other
1622
+ '10': kneel_down
1623
+ '11': kneeling
1624
+ '12': squat_down
1625
+ '13': squatting
1626
+ '14': crawl
1627
+ '15': jump
1628
+ - name: start
1629
+ dtype: float32
1630
+ - name: end
1631
+ dtype: float32
1632
+ - name: subject
1633
+ dtype: int32
1634
+ - name: cam
1635
+ dtype: int32
1636
+ - name: dataset
1637
+ dtype: string
1638
+ splits:
1639
+ - name: train
1640
+ num_bytes: 0
1641
+ num_examples: 670
1642
+ - name: validation
1643
+ num_bytes: 0
1644
+ num_examples: 94
1645
+ - name: test
1646
+ num_bytes: 0
1647
+ num_examples: 203
1648
+ - config_name: mcfd
1649
+ features:
1650
+ - name: path
1651
+ dtype: string
1652
+ - name: label
1653
+ dtype:
1654
+ class_label:
1655
+ names:
1656
+ '0': walk
1657
+ '1': fall
1658
+ '2': fallen
1659
+ '3': sit_down
1660
+ '4': sitting
1661
+ '5': lie_down
1662
+ '6': lying
1663
+ '7': stand_up
1664
+ '8': standing
1665
+ '9': other
1666
+ '10': kneel_down
1667
+ '11': kneeling
1668
+ '12': squat_down
1669
+ '13': squatting
1670
+ '14': crawl
1671
+ '15': jump
1672
+ - name: start
1673
+ dtype: float32
1674
+ - name: end
1675
+ dtype: float32
1676
+ - name: subject
1677
+ dtype: int32
1678
+ - name: cam
1679
+ dtype: int32
1680
+ - name: dataset
1681
+ dtype: string
1682
+ splits:
1683
+ - name: train
1684
+ num_bytes: 0
1685
+ num_examples: 1352
1686
+ - config_name: occu
1687
+ features:
1688
+ - name: path
1689
+ dtype: string
1690
+ - name: label
1691
+ dtype:
1692
+ class_label:
1693
+ names:
1694
+ '0': walk
1695
+ '1': fall
1696
+ '2': fallen
1697
+ '3': sit_down
1698
+ '4': sitting
1699
+ '5': lie_down
1700
+ '6': lying
1701
+ '7': stand_up
1702
+ '8': standing
1703
+ '9': other
1704
+ '10': kneel_down
1705
+ '11': kneeling
1706
+ '12': squat_down
1707
+ '13': squatting
1708
+ '14': crawl
1709
+ '15': jump
1710
+ - name: start
1711
+ dtype: float32
1712
+ - name: end
1713
+ dtype: float32
1714
+ - name: subject
1715
+ dtype: int32
1716
+ - name: cam
1717
+ dtype: int32
1718
+ - name: dataset
1719
+ dtype: string
1720
+ splits:
1721
+ - name: train
1722
+ num_bytes: 0
1723
+ num_examples: 289
1724
+ - name: validation
1725
+ num_bytes: 0
1726
+ num_examples: 94
1727
+ - name: test
1728
+ num_bytes: 0
1729
+ num_examples: 101
1730
+ - config_name: up_fall
1731
+ features:
1732
+ - name: path
1733
+ dtype: string
1734
+ - name: label
1735
+ dtype:
1736
+ class_label:
1737
+ names:
1738
+ '0': walk
1739
+ '1': fall
1740
+ '2': fallen
1741
+ '3': sit_down
1742
+ '4': sitting
1743
+ '5': lie_down
1744
+ '6': lying
1745
+ '7': stand_up
1746
+ '8': standing
1747
+ '9': other
1748
+ '10': kneel_down
1749
+ '11': kneeling
1750
+ '12': squat_down
1751
+ '13': squatting
1752
+ '14': crawl
1753
+ '15': jump
1754
+ - name: start
1755
+ dtype: float32
1756
+ - name: end
1757
+ dtype: float32
1758
+ - name: subject
1759
+ dtype: int32
1760
+ - name: cam
1761
+ dtype: int32
1762
+ - name: dataset
1763
+ dtype: string
1764
+ splits:
1765
+ - name: train
1766
+ num_bytes: 0
1767
+ num_examples: 2150
1768
+ - name: validation
1769
+ num_bytes: 0
1770
+ num_examples: 140
1771
+ - name: test
1772
+ num_bytes: 0
1773
+ num_examples: 522
1774
+ - config_name: cs-staged
1775
+ features:
1776
+ - name: path
1777
+ dtype: string
1778
+ - name: label
1779
+ dtype:
1780
+ class_label:
1781
+ names:
1782
+ '0': walk
1783
+ '1': fall
1784
+ '2': fallen
1785
+ '3': sit_down
1786
+ '4': sitting
1787
+ '5': lie_down
1788
+ '6': lying
1789
+ '7': stand_up
1790
+ '8': standing
1791
+ '9': other
1792
+ '10': kneel_down
1793
+ '11': kneeling
1794
+ '12': squat_down
1795
+ '13': squatting
1796
+ '14': crawl
1797
+ '15': jump
1798
+ - name: start
1799
+ dtype: float32
1800
+ - name: end
1801
+ dtype: float32
1802
+ - name: subject
1803
+ dtype: int32
1804
+ - name: cam
1805
+ dtype: int32
1806
+ - name: dataset
1807
+ dtype: string
1808
+ splits:
1809
+ - name: train
1810
+ num_bytes: 0
1811
+ num_examples: 26036
1812
+ - name: validation
1813
+ num_bytes: 0
1814
+ num_examples: 4272
1815
+ - name: test
1816
+ num_bytes: 0
1817
+ num_examples: 18664
1818
+ - config_name: cv-staged
1819
+ features:
1820
+ - name: path
1821
+ dtype: string
1822
+ - name: label
1823
+ dtype:
1824
+ class_label:
1825
+ names:
1826
+ '0': walk
1827
+ '1': fall
1828
+ '2': fallen
1829
+ '3': sit_down
1830
+ '4': sitting
1831
+ '5': lie_down
1832
+ '6': lying
1833
+ '7': stand_up
1834
+ '8': standing
1835
+ '9': other
1836
+ '10': kneel_down
1837
+ '11': kneeling
1838
+ '12': squat_down
1839
+ '13': squatting
1840
+ '14': crawl
1841
+ '15': jump
1842
+ - name: start
1843
+ dtype: float32
1844
+ - name: end
1845
+ dtype: float32
1846
+ - name: subject
1847
+ dtype: int32
1848
+ - name: cam
1849
+ dtype: int32
1850
+ - name: dataset
1851
+ dtype: string
1852
+ splits:
1853
+ - name: train
1854
+ num_bytes: 0
1855
+ num_examples: 8888
1856
+ - name: validation
1857
+ num_bytes: 0
1858
+ num_examples: 8185
1859
+ - name: test
1860
+ num_bytes: 0
1861
+ num_examples: 27199
1862
+ - config_name: cs-staged-wild
1863
+ features:
1864
+ - name: path
1865
+ dtype: string
1866
+ - name: label
1867
+ dtype:
1868
+ class_label:
1869
+ names:
1870
+ '0': walk
1871
+ '1': fall
1872
+ '2': fallen
1873
+ '3': sit_down
1874
+ '4': sitting
1875
+ '5': lie_down
1876
+ '6': lying
1877
+ '7': stand_up
1878
+ '8': standing
1879
+ '9': other
1880
+ '10': kneel_down
1881
+ '11': kneeling
1882
+ '12': squat_down
1883
+ '13': squatting
1884
+ '14': crawl
1885
+ '15': jump
1886
+ - name: start
1887
+ dtype: float32
1888
+ - name: end
1889
+ dtype: float32
1890
+ - name: subject
1891
+ dtype: int32
1892
+ - name: cam
1893
+ dtype: int32
1894
+ - name: dataset
1895
+ dtype: string
1896
+ splits:
1897
+ - name: train
1898
+ num_bytes: 0
1899
+ num_examples: 26036
1900
+ - name: validation
1901
+ num_bytes: 0
1902
+ num_examples: 4272
1903
+ - name: test
1904
+ num_bytes: 0
1905
+ num_examples: 3673
1906
+ - config_name: cv-staged-wild
1907
+ features:
1908
+ - name: path
1909
+ dtype: string
1910
+ - name: label
1911
+ dtype:
1912
+ class_label:
1913
+ names:
1914
+ '0': walk
1915
+ '1': fall
1916
+ '2': fallen
1917
+ '3': sit_down
1918
+ '4': sitting
1919
+ '5': lie_down
1920
+ '6': lying
1921
+ '7': stand_up
1922
+ '8': standing
1923
+ '9': other
1924
+ '10': kneel_down
1925
+ '11': kneeling
1926
+ '12': squat_down
1927
+ '13': squatting
1928
+ '14': crawl
1929
+ '15': jump
1930
+ - name: start
1931
+ dtype: float32
1932
+ - name: end
1933
+ dtype: float32
1934
+ - name: subject
1935
+ dtype: int32
1936
+ - name: cam
1937
+ dtype: int32
1938
+ - name: dataset
1939
+ dtype: string
1940
+ splits:
1941
+ - name: train
1942
+ num_bytes: 0
1943
+ num_examples: 8888
1944
+ - name: validation
1945
+ num_bytes: 0
1946
+ num_examples: 8185
1947
+ - name: test
1948
+ num_bytes: 0
1949
+ num_examples: 3673
1950
+ - config_name: OOPS
1951
+ features:
1952
+ - name: path
1953
+ dtype: string
1954
+ - name: label
1955
+ dtype:
1956
+ class_label:
1957
+ names:
1958
+ '0': walk
1959
+ '1': fall
1960
+ '2': fallen
1961
+ '3': sit_down
1962
+ '4': sitting
1963
+ '5': lie_down
1964
+ '6': lying
1965
+ '7': stand_up
1966
+ '8': standing
1967
+ '9': other
1968
+ '10': kneel_down
1969
+ '11': kneeling
1970
+ '12': squat_down
1971
+ '13': squatting
1972
+ '14': crawl
1973
+ '15': jump
1974
+ - name: start
1975
+ dtype: float32
1976
+ - name: end
1977
+ dtype: float32
1978
+ - name: subject
1979
+ dtype: int32
1980
+ - name: cam
1981
+ dtype: int32
1982
+ - name: dataset
1983
+ dtype: string
1984
+ splits:
1985
+ - name: train
1986
+ num_bytes: 0
1987
+ num_examples: 1023
1988
+ - name: validation
1989
+ num_bytes: 0
1990
+ num_examples: 482
1991
+ - name: test
1992
+ num_bytes: 0
1993
+ num_examples: 3673
1994
  ---
[![License: CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
<br/>

The repository is organized as follows:

+ - `omnifall_builder.py` - Dataset builder (reference, not used by HF directly)
+ - `parquet/` - Pre-built parquet files for all configs (used by `load_dataset`)
- `labels/` - CSV files containing temporal segment annotations
  - Staged/OOPS labels: 7 columns (`path, label, start, end, subject, cam, dataset`)
  - OF-Syn labels: 19 columns (7 core + 12 demographic/scene metadata)

## Evaluation Protocols

+ All configurations are loaded via `load_dataset("simplexsigil2/omnifall", "<config_name>")`.

### Labels (no train/val/test splits)
- `labels` (default): All staged + OOPS labels (52k segments, 7 columns)
- `labels-syn`: OF-Syn labels with demographic metadata (19k segments, 19 columns)
- `metadata-syn`: OF-Syn video-level metadata (12k videos)
+ - `framewise-syn`: OF-Syn frame-wise HDF5 labels (81 labels per video). **Requires the `omnifall` package (coming soon).**
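+ Every label config shares the 7-column core schema, so the segments are easy to work with in pandas after `to_pandas()`. A minimal sketch (the rows below are made up for illustration, not real OmniFall annotations):
+
+ ```python
+ import pandas as pd
+
+ # Toy rows in the 7-column core schema (path, label, start, end, subject, cam, dataset).
+ labels = pd.DataFrame({
+     "path": ["cmdfall/vid1.mp4", "cmdfall/vid1.mp4", "le2i/vid7.mp4"],
+     "label": [0, 1, 2],  # 0=walk, 1=fall, 2=fallen (see class list above)
+     "start": [0.0, 3.2, 5.0],
+     "end": [3.2, 4.1, 9.5],
+     "subject": [1, 1, 4],
+     "cam": [3, 3, 1],
+     "dataset": ["cmdfall", "cmdfall", "le2i"],
+ })
+
+ # Segment duration and a fall-only view.
+ labels["duration"] = labels["end"] - labels["start"]
+ falls = labels[labels["label"] == 1]
+ print(falls[["path", "start", "end", "duration"]])
+ ```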
### OF-Staged Configs
- `of-sta-cs`: 8 staged datasets, cross-subject splits

### OF-ItW Config
- `of-itw`: OOPS-Fall in-the-wild genuine accidents

+ Video loading requires the `omnifall` package (coming soon). See examples below.

### OF-Syn Configs
- `of-syn`: Fixed randomized 80/10/10 split

- `of-syn-cross-ethnicity`: Cross-ethnicity split
- `of-syn-cross-bmi`: Cross-BMI split (train: normal/underweight, test: obese)

+ Video loading for OF-Syn configs requires the `omnifall` package (coming soon).
### Cross-Domain Evaluation
- `of-sta-itw-cs`: Train/val on staged CS, test on OOPS

## Examples

+ For a complete interactive walkthrough of all configs, video loading, and label visualization, see the [example notebook](omnifall_dataset_examples.ipynb).

```python
from datasets import load_dataset

syn_labels = load_dataset("simplexsigil2/omnifall", "labels-syn")["train"]
```
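
+ The `label` column is stored as `ClassLabel` integers; a loaded dataset decodes them with `ds.features["label"].int2str(i)`. Since the 16-class mapping is identical across all configs, a plain lookup table (taken from the schema above) also works offline, as a sketch:
+
+ ```python
+ # The 16-class action list shared by every OmniFall config.
+ # The integer stored in `label` indexes into this list.
+ LABEL_NAMES = [
+     "walk", "fall", "fallen", "sit_down", "sitting", "lie_down", "lying",
+     "stand_up", "standing", "other", "kneel_down", "kneeling",
+     "squat_down", "squatting", "crawl", "jump",
+ ]
+
+ def int2str(label_id: int) -> str:
+     """Decode an integer label to its class name."""
+     return LABEL_NAMES[label_id]
+
+ print(int2str(1))   # -> fall
+ print(int2str(14))  # -> crawl
+ ```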

+ ### Loading Videos

+ Video loading (OF-Syn, OF-ItW, and cross-domain configs) requires the `omnifall` Python package, which will be available on PyPI soon. The package handles video download, caching, and integration with HuggingFace datasets.

+ For OOPS videos specifically, you can prepare them manually using the included script:

```bash
python prepare_oops_videos.py --output_dir /path/to/oops_prepared
```

+ The script streams the full OOPS archive from the original source and extracts only the 818 videos used in OF-ItW; the archive itself is never written to disk, so only ~2.6 GB of space is needed for the extracted videos. If you already have the OOPS archive downloaded locally, pass it with `--oops_archive /path/to/video_and_anns.tar.gz`.

## Label definitions

generate_parquet.py ADDED
@@ -0,0 +1,428 @@
1
+ """One-time script to generate parquet files for all OmniFall HF configs.
2
+
3
+ Created for the HF datasets 4.6 migration (dataset scripts no longer supported).
4
+ Generates parquet files that enable native load_dataset() without custom builder code.
5
+ Can be safely deleted after parquet files are committed to the Hub.
6
+
7
+ Usage:
8
+ python generate_parquet.py
9
+ """
10
+
11
+ import os
12
+ from pathlib import Path
13
+
14
+ import numpy as np
15
+ import pandas as pd
16
+
17
+ REPO_ROOT = Path(__file__).parent
18
+ PARQUET_DIR = REPO_ROOT / "parquet"
19
+
20
+ # ---- Label and split file paths ----
21
+
22
+ STAGED_DATASETS = [
23
+ "caucafall", "cmdfall", "edf", "gmdcsa24",
24
+ "le2i", "mcfd", "occu", "up_fall",
25
+ ]
26
+
27
+ # Label CSV filenames (note: GMDCSA24 has capitalized filename)
28
+ STAGED_LABEL_FILES = {
29
+ "caucafall": "labels/caucafall.csv",
30
+ "cmdfall": "labels/cmdfall.csv",
31
+ "edf": "labels/edf.csv",
32
+ "gmdcsa24": "labels/GMDCSA24.csv",
33
+ "le2i": "labels/le2i.csv",
34
+ "mcfd": "labels/mcfd.csv",
35
+ "occu": "labels/occu.csv",
36
+ "up_fall": "labels/up_fall.csv",
37
+ }
38
+ ITW_LABEL_FILE = "labels/OOPS.csv"
39
+ SYN_LABEL_FILE = "labels/of-syn.csv"
40
+ METADATA_FILE = "videos/metadata.csv"
41
+
42
+ CORE_COLUMNS = ["path", "label", "start", "end", "subject", "cam", "dataset"]
43
+ DEMOGRAPHIC_COLUMNS = [
44
+ "age_group", "gender_presentation", "monk_skin_tone",
45
+ "race_ethnicity_omb", "bmi_band", "height_band",
46
+ "environment_category", "camera_shot", "speed",
47
+ "camera_elevation", "camera_azimuth", "camera_distance",
48
+ ]
49
+ SYN_COLUMNS = CORE_COLUMNS + DEMOGRAPHIC_COLUMNS
50
+ METADATA_COLUMNS = ["path", "dataset"] + DEMOGRAPHIC_COLUMNS
51
+
52
+ # ---- Deprecated aliases ----
53
+
54
+ DEPRECATED_ALIASES = {
55
+ "cs-staged": "of-sta-cs",
56
+ "cv-staged": "of-sta-cv",
57
+ "cs-staged-wild": "of-sta-itw-cs",
58
+ "cv-staged-wild": "of-sta-itw-cv",
59
+ "OOPS": "of-itw",
60
+ }
61
+
62
+
63
+ # ---- Helpers ----
64
+
65
+ def load_csv(relpath):
66
+ """Load a CSV file relative to REPO_ROOT."""
67
+ return pd.read_csv(REPO_ROOT / relpath)
68
+
69
+
70
+ def load_staged_labels(datasets=None):
71
+ """Load and concatenate staged label CSVs."""
72
+ if datasets is None:
73
+ datasets = STAGED_DATASETS
74
+ dfs = [load_csv(STAGED_LABEL_FILES[ds]) for ds in datasets]
75
+ return pd.concat(dfs, ignore_index=True)
76
+
77
+
78
+ def load_itw_labels():
79
+ """Load OOPS/ItW labels."""
80
+ return load_csv(ITW_LABEL_FILE)
81
+
82
+
83
+ def load_syn_labels():
84
+ """Load OF-Syn labels (19-col)."""
85
+ return load_csv(SYN_LABEL_FILE)
86
+
87
+
88
+def staged_split_files(split_type, split_name):
+    """Return list of split CSV relative paths for all 8 staged datasets."""
+    return [f"splits/{split_type}/{ds}/{split_name}.csv" for ds in STAGED_DATASETS]
+
+
+def merge_split_labels(split_files, labels_df):
+    """Merge split paths with labels, replicating _gen_split_merge logic."""
+    split_dfs = [load_csv(sf) for sf in split_files]
+    split_df = pd.concat(split_dfs, ignore_index=True)
+    merged = pd.merge(split_df, labels_df, on="path", how="left")
+    # Drop rows where the path didn't match any label (orphaned split entries)
+    unmatched = merged["label"].isna()
+    if unmatched.any():
+        n = unmatched.sum()
+        paths = merged.loc[unmatched, "path"].tolist()
+        print(f"  WARNING: Dropping {n} unmatched path(s): {paths}")
+        merged = merged[~unmatched].reset_index(drop=True)
+    return merged
+
+
+def cast_core_dtypes(df):
+    """Cast core columns to correct dtypes for parquet/ClassLabel."""
+    df = df.copy()
+    df["path"] = df["path"].astype(str)
+    df["label"] = df["label"].astype(int)
+    df["start"] = df["start"].astype(np.float32)
+    df["end"] = df["end"].astype(np.float32)
+    df["subject"] = df["subject"].astype(np.int32)
+    df["cam"] = df["cam"].astype(np.int32)
+    df["dataset"] = df["dataset"].astype(str)
+    return df
+
+
+def cast_demographic_dtypes(df):
+    """Cast demographic columns to string (for ClassLabel encoding)."""
+    df = df.copy()
+    for col in DEMOGRAPHIC_COLUMNS:
+        if col in df.columns:
+            df[col] = df[col].astype(str)
+    return df
+
+
+def select_and_cast(df, columns, schema="core"):
+    """Select columns and cast dtypes."""
+    df = df[columns].copy()
+    if schema in ("core", "syn"):
+        df = cast_core_dtypes(df)
+    if schema in ("syn", "metadata"):
+        df = cast_demographic_dtypes(df)
+    return df
+
+
+def write_parquet(df, config_name, split_name):
+    """Write a dataframe as a parquet file in the expected layout.
+
+    Returns the output path, or None if the dataframe is empty (Arrow can't
+    handle 0-row parquet files).
+    """
+    if len(df) == 0:
+        print(f"  SKIP {config_name}/{split_name}: 0 rows (not written)")
+        return None
+    out_dir = PARQUET_DIR / config_name
+    out_dir.mkdir(parents=True, exist_ok=True)
+    out_path = out_dir / f"{split_name}-00000-of-00001.parquet"
+    df.to_parquet(out_path, index=False)
+    return out_path
+
+
+def generate_split_config(config_name, split_type, split_files_fn, labels_df, columns,
+                          schema="core"):
+    """Generate train/val/test parquet files for a split-based config."""
+    results = {}
+    for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]:
+        sf = split_files_fn(split_type, csv_name)
+        merged = merge_split_labels(sf, labels_df)
+        df = select_and_cast(merged, columns, schema)
+        write_parquet(df, config_name, split_name)
+        results[split_name] = len(df)
+    return results
+
+
+def copy_parquet(source_config, target_config):
+    """Copy parquet files from source config to target config (for deprecated aliases)."""
+    src_dir = PARQUET_DIR / source_config
+    dst_dir = PARQUET_DIR / target_config
+    dst_dir.mkdir(parents=True, exist_ok=True)
+    results = {}
+    for src_file in sorted(src_dir.glob("*.parquet")):
+        dst_file = dst_dir / src_file.name
+        # Read and re-write to avoid symlink issues with git
+        df = pd.read_parquet(src_file)
+        df.to_parquet(dst_file, index=False)
+        split_name = src_file.stem.split("-")[0]
+        results[split_name] = len(df)
+    return results
+
+
186
+
187
+ def gen_labels():
188
+ """Config: labels - All staged + OOPS labels, single train split."""
189
+ staged = load_staged_labels()
190
+ itw = load_itw_labels()
191
+ df = pd.concat([staged, itw], ignore_index=True)
192
+ df = select_and_cast(df, CORE_COLUMNS, "core")
193
+ path = write_parquet(df, "labels", "train")
194
+ return {"labels": {"train": len(df)}}
195
+
196
+
197
+ def gen_labels_syn():
198
+ """Config: labels-syn - OF-Syn labels with demographics, single train split."""
199
+ df = load_syn_labels()
200
+ df = select_and_cast(df, SYN_COLUMNS, "syn")
201
+ path = write_parquet(df, "labels-syn", "train")
202
+ return {"labels-syn": {"train": len(df)}}
203
+
204
+
205
+ def gen_metadata_syn():
206
+ """Config: metadata-syn - OF-Syn video-level metadata, single train split."""
207
+ df = load_csv(METADATA_FILE)
208
+ # Select only the metadata columns (drop prompt_id)
209
+ metadata_cols = ["path"] + DEMOGRAPHIC_COLUMNS
210
+ available = [c for c in metadata_cols if c in df.columns]
211
+ df = df[available].drop_duplicates(subset=["path"]).reset_index(drop=True)
212
+ df["dataset"] = "of-syn"
213
+ df = select_and_cast(df, METADATA_COLUMNS, "metadata")
214
+ path = write_parquet(df, "metadata-syn", "train")
215
+ return {"metadata-syn": {"train": len(df)}}
216
+
217
+
218
+ def gen_of_sta(split_type):
219
+ """Config: of-sta-cs / of-sta-cv - 8 staged datasets combined."""
220
+ config_name = f"of-sta-{split_type}"
221
+ labels = load_staged_labels()
222
+ results = generate_split_config(
223
+ config_name, split_type,
224
+ lambda st, sn: staged_split_files(st, sn),
225
+ labels, CORE_COLUMNS, "core",
226
+ )
227
+ return {config_name: results}
228
+
229
+
230
+ def gen_of_itw():
231
+ """Config: of-itw - OOPS-Fall in-the-wild."""
232
+ labels = load_itw_labels()
233
+ results = {}
234
+ for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]:
235
+ sf = [f"splits/cs/OOPS/{csv_name}.csv"]
236
+ merged = merge_split_labels(sf, labels)
237
+ df = select_and_cast(merged, CORE_COLUMNS, "core")
238
+ write_parquet(df, "of-itw", split_name)
239
+ results[split_name] = len(df)
240
+ return {"of-itw": results}
241
+
242
+
243
+ def gen_of_syn(split_type, config_name):
244
+ """Config: of-syn variants."""
245
+ labels = load_syn_labels()
246
+ results = {}
247
+ for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]:
248
+ sf = [f"splits/syn/{split_type}/{csv_name}.csv"]
249
+ merged = merge_split_labels(sf, labels)
250
+ df = select_and_cast(merged, SYN_COLUMNS, "syn")
251
+ write_parquet(df, config_name, split_name)
252
+ results[split_name] = len(df)
253
+ return {config_name: results}
254
+
255
+
256
+ def gen_crossdomain(config_name, train_split_type, train_source, test_split_type,
257
+ test_source):
258
+ """Config: cross-domain configs (train from one source, test from another)."""
259
+ # Load labels for train and test sources
260
+ if train_source == "staged":
261
+ train_labels = load_staged_labels()
262
+ train_split_fn = lambda sn: staged_split_files(train_split_type, sn)
263
+ elif train_source == "syn":
264
+ train_labels = load_syn_labels()
265
+ train_split_fn = lambda sn: [f"splits/syn/{train_split_type}/{sn}.csv"]
266
+ else:
267
+ raise ValueError(f"Unknown train_source: {train_source}")
268
+
269
+ if test_source == "itw":
270
+ test_labels = load_itw_labels()
271
+ test_split_fn = lambda sn: [f"splits/{test_split_type}/OOPS/{sn}.csv"]
272
+ else:
273
+ raise ValueError(f"Unknown test_source: {test_source}")
274
+
275
+ results = {}
276
+
277
+ # Train and val come from train source
278
+ for split_name, csv_name in [("train", "train"), ("validation", "val")]:
279
+ sf = train_split_fn(csv_name)
280
+ merged = merge_split_labels(sf, train_labels)
281
+ # Cross-domain always uses core 7-col schema
282
+ df = select_and_cast(merged, CORE_COLUMNS, "core")
283
+ write_parquet(df, config_name, split_name)
284
+ results[split_name] = len(df)
285
+
286
+ # Test comes from test source
287
+ sf = test_split_fn("test")
288
+ merged = merge_split_labels(sf, test_labels)
289
+ df = select_and_cast(merged, CORE_COLUMNS, "core")
290
+ write_parquet(df, config_name, "test")
291
+ results["test"] = len(df)
292
+
293
+ return {config_name: results}
294
+
295
+
296
+def gen_aggregate(split_type):
+    """Config: cs / cv - all staged + OOPS combined."""
+    config_name = split_type
+    all_labels = pd.concat([load_staged_labels(), load_itw_labels()], ignore_index=True)
+    results = {}
+    for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]:
+        sf = staged_split_files(split_type, csv_name) + [
+            f"splits/{split_type}/OOPS/{csv_name}.csv"
+        ]
+        merged = merge_split_labels(sf, all_labels)
+        df = select_and_cast(merged, CORE_COLUMNS, "core")
+        write_parquet(df, config_name, split_name)
+        results[split_name] = len(df)
+    return {config_name: results}
+
+
+def gen_individual(ds_name):
+    """Config: individual dataset with CS splits."""
+    labels = load_csv(STAGED_LABEL_FILES[ds_name])
+    results = {}
+    for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]:
+        sf = [f"splits/cs/{ds_name}/{csv_name}.csv"]
+        merged = merge_split_labels(sf, labels)
+        df = select_and_cast(merged, CORE_COLUMNS, "core")
+        write_parquet(df, ds_name, split_name)
+        results[split_name] = len(df)
+    return {ds_name: results}
+
+
+# ---- Main ----
+
+def main():
+    print(f"Generating parquet files in: {PARQUET_DIR}")
+    PARQUET_DIR.mkdir(parents=True, exist_ok=True)
+
+    all_results = {}
+
+    # Labels configs (single train split)
+    print("\n--- Labels configs ---")
+    for gen_fn in [gen_labels, gen_labels_syn, gen_metadata_syn]:
+        result = gen_fn()
+        all_results.update(result)
+        for config, splits in result.items():
+            for split, count in splits.items():
+                print(f"  {config}/{split}: {count} rows")
+
+    # OF-Staged configs
+    print("\n--- OF-Staged configs ---")
+    for st in ["cs", "cv"]:
+        result = gen_of_sta(st)
+        all_results.update(result)
+        for config, splits in result.items():
+            for split, count in splits.items():
+                print(f"  {config}/{split}: {count} rows")
+
+    # OF-ItW config
+    print("\n--- OF-ItW config ---")
+    result = gen_of_itw()
+    all_results.update(result)
+    for config, splits in result.items():
+        for split, count in splits.items():
+            print(f"  {config}/{split}: {count} rows")
+
+    # OF-Syn configs
+    print("\n--- OF-Syn configs ---")
+    syn_configs = [
+        ("random", "of-syn"),
+        ("cross_age", "of-syn-cross-age"),
+        ("cross_ethnicity", "of-syn-cross-ethnicity"),
+        ("cross_bmi", "of-syn-cross-bmi"),
+    ]
+    for split_type, config_name in syn_configs:
+        result = gen_of_syn(split_type, config_name)
+        all_results.update(result)
+        for config, splits in result.items():
+            for split, count in splits.items():
+                print(f"  {config}/{split}: {count} rows")
+
+    # Cross-domain configs
+    print("\n--- Cross-domain configs ---")
+    crossdomain_configs = [
+        ("of-sta-itw-cs", "cs", "staged", "cs", "itw"),
+        ("of-sta-itw-cv", "cv", "staged", "cv", "itw"),
+        ("of-syn-itw", "random", "syn", "cs", "itw"),
+    ]
+    for config_name, train_st, train_src, test_st, test_src in crossdomain_configs:
+        result = gen_crossdomain(config_name, train_st, train_src, test_st, test_src)
+        all_results.update(result)
+        for config, splits in result.items():
+            for split, count in splits.items():
+                print(f"  {config}/{split}: {count} rows")
+
+    # Aggregate configs
+    print("\n--- Aggregate configs ---")
+    for st in ["cs", "cv"]:
+        result = gen_aggregate(st)
+        all_results.update(result)
+        for config, splits in result.items():
+            for split, count in splits.items():
+                print(f"  {config}/{split}: {count} rows")
+
+    # Individual dataset configs
+    print("\n--- Individual dataset configs ---")
+    for ds_name in STAGED_DATASETS:
+        result = gen_individual(ds_name)
+        all_results.update(result)
+        for config, splits in result.items():
+            for split, count in splits.items():
+                print(f"  {config}/{split}: {count} rows")
+
+    # Deprecated aliases (copy parquet files)
+    print("\n--- Deprecated aliases ---")
+    for old_name, new_name in DEPRECATED_ALIASES.items():
+        result = copy_parquet(new_name, old_name)
+        for split, count in result.items():
+            print(f"  {old_name}/{split}: {count} rows (alias of {new_name})")
+        all_results[old_name] = result
+
+    # Summary
+    print(f"\n{'='*60}")
+    print(f"Generated parquet files for {len(all_results)} configs")
+    total_files = sum(1 for _ in PARQUET_DIR.rglob("*.parquet"))
+    print(f"Total parquet files: {total_files}")
+
+    # Print total size
+    total_bytes = sum(f.stat().st_size for f in PARQUET_DIR.rglob("*.parquet"))
+    print(f"Total size: {total_bytes / 1024 / 1024:.1f} MB")
+
+    return all_results
+
+
+if __name__ == "__main__":
+    main()
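
The merge-and-drop step in `merge_split_labels` can be illustrated with a small in-memory sketch (the paths and labels below are hypothetical, not actual OmniFall entries):

```python
import pandas as pd

# Hypothetical split list and label table mimicking the CSV contents.
split_df = pd.DataFrame({"path": ["a.mp4", "b.mp4", "c.mp4"]})
labels_df = pd.DataFrame({"path": ["a.mp4", "b.mp4"], "label": [1, 0]})

# Left-merge on "path", then drop split entries that matched no label row
# (these would otherwise surface as NaN labels downstream).
merged = pd.merge(split_df, labels_df, on="path", how="left")
merged = merged[~merged["label"].isna()].reset_index(drop=True)

print(merged["path"].tolist())  # ['a.mp4', 'b.mp4']
```

Note that the unmatched "c.mp4" row is silently removed here; the script above additionally prints a warning listing the dropped paths.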
omnifall_builder.py ADDED
@@ -0,0 +1,1192 @@
+"""OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection
+
+This dataset builder provides unified access to the OmniFall benchmark, which integrates:
+- OF-Staged (OF-Sta): 8 public staged fall detection datasets (~14h single-view)
+- OF-In-the-Wild (OF-ItW): Curated genuine accident videos from OOPS (~2.7h)
+- OF-Synthetic (OF-Syn): 12,000 synthetic videos generated with Wan 2.2 (~17h)
+
+All components share a 16-class activity taxonomy. Staged datasets use classes 0-9,
+while OF-ItW and OF-Syn use the full 0-15 range.
+"""
+
+import os
+import warnings
+
+import pandas as pd
+
+import datasets
+from datasets import (
+    BuilderConfig,
+    GeneratorBasedBuilder,
+    Features,
+    Value,
+    ClassLabel,
+    Sequence,
+    SplitGenerator,
+    Split,
+    Video,
+)
+
+_CITATION = """\
+@misc{omnifall,
+      title={OmniFall: A Unified Staged-to-Wild Benchmark for Human Fall Detection},
+      author={David Schneider and Zdravko Marinov and Rafael Baur and Zeyun Zhong and Rodi D\\\"uger and Rainer Stiefelhagen},
+      year={2025},
+      eprint={2505.19889},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2505.19889},
+}
+"""
+
+_DESCRIPTION = """\
+OmniFall is a comprehensive benchmark that unifies staged, in-the-wild, and synthetic
+fall detection datasets under a common 16-class activity taxonomy.
+"""
+
+_HOMEPAGE = "https://huggingface.co/datasets/simplexsigil2/omnifall"
+_LICENSE = "cc-by-nc-4.0"
+
+# 16 activity classes shared across all components
+_ACTIVITY_LABELS = [
+    "walk",         # 0
+    "fall",         # 1
+    "fallen",       # 2
+    "sit_down",     # 3
+    "sitting",      # 4
+    "lie_down",     # 5
+    "lying",        # 6
+    "stand_up",     # 7
+    "standing",     # 8
+    "other",        # 9
+    "kneel_down",   # 10
+    "kneeling",     # 11
+    "squat_down",   # 12
+    "squatting",    # 13
+    "crawl",        # 14
+    "jump",         # 15
+]
+
+# Demographic and scene metadata categories (OF-Syn only)
+_AGE_GROUPS = [
+    "toddlers_1_4", "children_5_12", "teenagers_13_17",
+    "young_adults_18_34", "middle_aged_35_64", "elderly_65_plus",
+]
+_GENDERS = ["male", "female"]
+_SKIN_TONES = [f"mst{i}" for i in range(1, 11)]
+_ETHNICITIES = ["white", "black", "asian", "hispanic_latino", "aian", "nhpi", "mena"]
+_BMI_BANDS = ["underweight", "normal", "overweight", "obese"]
+_HEIGHT_BANDS = ["short", "avg", "tall"]
+_ENVIRONMENTS = ["indoor", "outdoor"]
+_CAMERA_ELEVATIONS = ["eye", "low", "high", "top"]
+_CAMERA_AZIMUTHS = ["front", "rear", "left", "right"]
+_CAMERA_DISTANCES = ["medium", "far"]
+_CAMERA_SHOTS = ["static_wide", "static_medium_wide"]
+_SPEEDS = ["24fps_rt", "25fps_rt", "30fps_rt", "std_rt"]
+
+# The 8 staged datasets
+_STAGED_DATASETS = [
+    "caucafall", "cmdfall", "edf", "gmdcsa24",
+    "le2i", "mcfd", "occu", "up_fall",
+]
+
+# Label CSV file paths (relative to repo root)
+_STAGED_LABEL_FILES = [f"labels/{name}.csv" for name in [
+    "caucafall", "cmdfall", "edf", "GMDCSA24",
+    "le2i", "mcfd", "occu", "up_fall",
+]]
+_ITW_LABEL_FILE = "labels/OOPS.csv"
+_SYN_LABEL_FILE = "labels/of-syn.csv"
+_SYN_VIDEO_ARCHIVE = "data_files/omnifall-synthetic_av1.tar"
+
+# OOPS video auto-download configuration
+_OOPS_CACHE_DIR = os.path.join(os.path.expanduser("~"), ".cache", "omnifall", "oops_prepared")
+_OOPS_URL = "https://oops.cs.columbia.edu/data/video_and_anns.tar.gz"
+_OOPS_EXPECTED_VIDEO_COUNT = 818
+_OOPS_MAPPING_FILE = "data_files/oops_video_mapping.csv"
+
+_OOPS_LICENSE_TEXT = """\
+==========================================================================
+OOPS Dataset License Notice
+==========================================================================
+
+The OF-ItW component of OmniFall uses videos from the OOPS dataset.
+The following notice is from the OOPS dataset website
+(https://oops.cs.columbia.edu/data/):
+
+    "By pressing any of the links above, you acknowledge that we do not
+    own the copyright to these videos and that they are solely provided
+    for non-commercial research and/or educational purposes. This dataset
+    is licensed under a Creative Commons Attribution-NonCommercial-
+    ShareAlike 4.0 International License."
+
+If you use OF-ItW in your research, please also cite the OOPS paper:
+
+    @inproceedings{{epstein2020oops,
+      title={{Oops! predicting unintentional action in video}},
+      author={{Epstein, Dave and Chen, Boyuan and Vondrick, Carl}},
+      booktitle={{Proceedings of the IEEE/CVF Conference on Computer
+                  Vision and Pattern Recognition}},
+      pages={{919--929}},
+      year={{2020}}
+    }}
+
+The download will stream ~45GB from the OOPS website and extract {count}
+videos (~2.6GB disk space) to: {cache_dir}
+==========================================================================
+"""
+
+
+# ---- Feature schema definitions ----
+
+def _core_features():
+    """7-column schema for staged/OOPS data."""
+    return Features({
+        "path": Value("string"),
+        "label": ClassLabel(num_classes=16, names=_ACTIVITY_LABELS),
+        "start": Value("float32"),
+        "end": Value("float32"),
+        "subject": Value("int32"),
+        "cam": Value("int32"),
+        "dataset": Value("string"),
+    })
+
+
+def _syn_features():
+    """19-column schema for synthetic data (core + demographic/scene metadata)."""
+    return Features({
+        "path": Value("string"),
+        "label": ClassLabel(num_classes=16, names=_ACTIVITY_LABELS),
+        "start": Value("float32"),
+        "end": Value("float32"),
+        "subject": Value("int32"),
+        "cam": Value("int32"),
+        "dataset": Value("string"),
+        # Demographic metadata
+        "age_group": ClassLabel(num_classes=6, names=_AGE_GROUPS),
+        "gender_presentation": ClassLabel(num_classes=2, names=_GENDERS),
+        "monk_skin_tone": ClassLabel(num_classes=10, names=_SKIN_TONES),
+        "race_ethnicity_omb": ClassLabel(num_classes=7, names=_ETHNICITIES),
+        "bmi_band": ClassLabel(num_classes=4, names=_BMI_BANDS),
+        "height_band": ClassLabel(num_classes=3, names=_HEIGHT_BANDS),
+        # Scene metadata
+        "environment_category": ClassLabel(num_classes=2, names=_ENVIRONMENTS),
+        "camera_shot": ClassLabel(num_classes=2, names=_CAMERA_SHOTS),
+        "speed": ClassLabel(num_classes=4, names=_SPEEDS),
+        "camera_elevation": ClassLabel(num_classes=4, names=_CAMERA_ELEVATIONS),
+        "camera_azimuth": ClassLabel(num_classes=4, names=_CAMERA_AZIMUTHS),
+        "camera_distance": ClassLabel(num_classes=2, names=_CAMERA_DISTANCES),
+    })
+
+
+def _syn_metadata_features():
+    """Feature schema for OF-Syn metadata config (video-level, no temporal segments)."""
+    return Features({
+        "path": Value("string"),
+        "dataset": Value("string"),
+        "age_group": ClassLabel(num_classes=6, names=_AGE_GROUPS),
+        "gender_presentation": ClassLabel(num_classes=2, names=_GENDERS),
+        "monk_skin_tone": ClassLabel(num_classes=10, names=_SKIN_TONES),
+        "race_ethnicity_omb": ClassLabel(num_classes=7, names=_ETHNICITIES),
+        "bmi_band": ClassLabel(num_classes=4, names=_BMI_BANDS),
+        "height_band": ClassLabel(num_classes=3, names=_HEIGHT_BANDS),
+        "environment_category": ClassLabel(num_classes=2, names=_ENVIRONMENTS),
+        "camera_shot": ClassLabel(num_classes=2, names=_CAMERA_SHOTS),
+        "speed": ClassLabel(num_classes=4, names=_SPEEDS),
+        "camera_elevation": ClassLabel(num_classes=4, names=_CAMERA_ELEVATIONS),
+        "camera_azimuth": ClassLabel(num_classes=4, names=_CAMERA_AZIMUTHS),
+        "camera_distance": ClassLabel(num_classes=2, names=_CAMERA_DISTANCES),
+    })
+
+
+def _syn_framewise_features():
+    """Feature schema for OF-Syn frame-wise labels (81 labels per video)."""
+    return Features({
+        "path": Value("string"),
+        "dataset": Value("string"),
+        "frame_labels": Sequence(
+            ClassLabel(num_classes=16, names=_ACTIVITY_LABELS), length=81
+        ),
+        "age_group": ClassLabel(num_classes=6, names=_AGE_GROUPS),
+        "gender_presentation": ClassLabel(num_classes=2, names=_GENDERS),
+        "monk_skin_tone": ClassLabel(num_classes=10, names=_SKIN_TONES),
+        "race_ethnicity_omb": ClassLabel(num_classes=7, names=_ETHNICITIES),
+        "bmi_band": ClassLabel(num_classes=4, names=_BMI_BANDS),
+        "height_band": ClassLabel(num_classes=3, names=_HEIGHT_BANDS),
+        "environment_category": ClassLabel(num_classes=2, names=_ENVIRONMENTS),
+        "camera_shot": ClassLabel(num_classes=2, names=_CAMERA_SHOTS),
+        "speed": ClassLabel(num_classes=4, names=_SPEEDS),
+        "camera_elevation": ClassLabel(num_classes=4, names=_CAMERA_ELEVATIONS),
+        "camera_azimuth": ClassLabel(num_classes=4, names=_CAMERA_AZIMUTHS),
+        "camera_distance": ClassLabel(num_classes=2, names=_CAMERA_DISTANCES),
+    })
+
+
+def _paths_only_features():
+    """Minimal feature schema for paths-only mode."""
+    return Features({"path": Value("string")})
+
+
+# ---- Config ----
+
+class OmniFallConfig(BuilderConfig):
+    """BuilderConfig for OmniFall dataset.
+
+    Args:
+        config_type: What kind of data to load.
+            "labels"    - All labels in a single split (no train/val/test).
+            "split"     - Train/val/test splits from split CSV files.
+            "metadata"  - Video-level metadata (OF-Syn only).
+            "framewise" - Frame-wise HDF5 labels (OF-Syn only).
+        data_source: Which component(s) to load.
+            "staged"     - 8 staged lab datasets
+            "itw"        - OOPS in-the-wild
+            "syn"        - OF-Syn synthetic
+            "staged+itw" - Staged and OOPS combined
+            Individual dataset names (e.g. "cmdfall") for single datasets.
+        split_type: Split strategy.
+            "cs" / "cv" for staged/OOPS, "random" / "cross_age" / etc. for synthetic.
+        train_source: For cross-domain configs, overrides data_source for train/val.
+        test_source: For cross-domain configs, overrides data_source for test.
+        test_split_type: For cross-domain configs, overrides split_type for test.
+        paths_only: If True, only return video paths (no label merging).
+        framewise: If True, load frame-wise labels from HDF5 (OF-Syn only).
+        include_video: If True, download and include video files.
+            For OF-Syn configs, videos are downloaded from the HF repo.
+            For OF-ItW configs, requires oops_video_dir to be set.
+        decode_video: If True (default), use Video() feature for auto-decoding.
+            If False, return absolute file path as string.
+        oops_video_dir: Path to directory containing prepared OOPS videos
+            (produced by prepare_oops_videos.py). Required when loading
+            OF-ItW configs with include_video=True.
+        deprecated_alias_for: If set, this config is a deprecated alias.
+    """
+
+    def __init__(
+        self,
+        config_type="labels",
+        data_source="staged+itw",
+        split_type=None,
+        train_source=None,
+        test_source=None,
+        test_split_type=None,
+        paths_only=False,
+        framewise=False,
+        include_video=False,
+        decode_video=True,
+        oops_video_dir=None,
+        deprecated_alias_for=None,
+        **kwargs,
+    ):
+        super().__init__(**kwargs)
+        self.config_type = config_type
+        self.data_source = data_source
+        self.split_type = split_type
+        self.train_source = train_source
+        self.test_source = test_source
+        self.test_split_type = test_split_type
+        self.paths_only = paths_only
+        self.framewise = framewise
+        self.include_video = include_video
+        self.decode_video = decode_video
+        self.oops_video_dir = oops_video_dir
+        self.deprecated_alias_for = deprecated_alias_for
+
+    @property
+    def is_crossdomain(self):
+        return self.train_source is not None
+
+
+def _make_config(name, description, **kwargs):
+    """Helper to create a config with consistent version."""
+    return OmniFallConfig(
+        name=name,
+        version=datasets.Version("2.0.0"),
+        description=description,
+        **kwargs,
+    )
+
+
+# ---- Config definitions ----
+
+_LABELS_CONFIGS = [
+    _make_config(
+        "labels",
+        "All staged + OOPS labels (52k segments, 7 columns). Default config.",
+        config_type="labels",
+        data_source="staged+itw",
+    ),
+    _make_config(
+        "labels-syn",
+        "OF-Syn labels with demographic metadata (19k segments, 19 columns).",
+        config_type="labels",
+        data_source="syn",
+    ),
+    _make_config(
+        "metadata-syn",
+        "OF-Syn video-level metadata (12k videos, no temporal segments).",
+        config_type="metadata",
+        data_source="syn",
+    ),
+    _make_config(
+        "framewise-syn",
+        "OF-Syn frame-wise labels from HDF5 (81 labels per video).",
+        config_type="framewise",
+        data_source="syn",
+        framewise=True,
+    ),
+]
+
+_AGGREGATE_CONFIGS = [
+    _make_config(
+        "cs",
+        "Cross-subject splits for all staged + OOPS datasets combined.",
+        config_type="split",
+        data_source="staged+itw",
+        split_type="cs",
+    ),
+    _make_config(
+        "cv",
+        "Cross-view splits for all staged + OOPS datasets combined.",
+        config_type="split",
+        data_source="staged+itw",
+        split_type="cv",
+    ),
+]
+
+_PRIMARY_CONFIGS = [
+    _make_config(
+        "of-sta-cs",
+        "OF-Staged: 8 staged datasets, cross-subject splits.",
+        config_type="split",
+        data_source="staged",
+        split_type="cs",
+    ),
+    _make_config(
+        "of-sta-cv",
+        "OF-Staged: 8 staged datasets, cross-view splits.",
+        config_type="split",
+        data_source="staged",
+        split_type="cv",
+    ),
+    _make_config(
+        "of-itw",
+        "OF-ItW: OOPS-Fall in-the-wild genuine accidents.",
+        config_type="split",
+        data_source="itw",
+        split_type="cs",
+    ),
+    _make_config(
+        "of-syn",
+        "OF-Syn: synthetic, random 80/10/10 split.",
+        config_type="split",
+        data_source="syn",
+        split_type="random",
+    ),
+    _make_config(
+        "of-syn-cross-age",
+        "OF-Syn: cross-age split (train: adults, test: children/elderly).",
+        config_type="split",
+        data_source="syn",
+        split_type="cross_age",
+    ),
+    _make_config(
+        "of-syn-cross-ethnicity",
+        "OF-Syn: cross-ethnicity split.",
+        config_type="split",
+        data_source="syn",
+        split_type="cross_ethnicity",
+    ),
+    _make_config(
+        "of-syn-cross-bmi",
+        "OF-Syn: cross-BMI split (train: normal/underweight, test: obese).",
+        config_type="split",
+        data_source="syn",
+        split_type="cross_bmi",
+    ),
+]
+
+_CROSSDOMAIN_CONFIGS = [
+    _make_config(
+        "of-sta-itw-cs",
+        "Cross-domain: train/val on staged CS, test on OOPS.",
+        config_type="split",
+        data_source="staged",
+        split_type="cs",
+        train_source="staged",
+        test_source="itw",
+        test_split_type="cs",
+    ),
+    _make_config(
+        "of-sta-itw-cv",
+        "Cross-domain: train/val on staged CV, test on OOPS.",
+        config_type="split",
+        data_source="staged",
+        split_type="cv",
+        train_source="staged",
+        test_source="itw",
+        test_split_type="cv",
+    ),
+    _make_config(
+        "of-syn-itw",
+        "Cross-domain: train/val on OF-Syn random, test on OOPS.",
+        config_type="split",
+        data_source="syn",
+        split_type="random",
+        train_source="syn",
+        test_source="itw",
+        test_split_type="cs",
+    ),
+]
+
+_INDIVIDUAL_CONFIGS = [
+    _make_config(
+        name,
+        f"{name} dataset with cross-subject splits.",
+        config_type="split",
+        data_source=name,
+        split_type="cs",
+    )
+    for name in _STAGED_DATASETS
+]
+
+# Deprecated aliases: defined with full correct attributes so _info() works
+# immediately (HF calls _info() during __init__, before any custom init code).
+_DEPRECATED_ALIASES = {
+    "cs-staged": "of-sta-cs",
+    "cv-staged": "of-sta-cv",
+    "cs-staged-wild": "of-sta-itw-cs",
+    "cv-staged-wild": "of-sta-itw-cv",
+    "OOPS": "of-itw",
+}
+
+# Build a lookup from config name to config object
+_ALL_NAMED_CONFIGS = {
+    cfg.name: cfg
+    for cfg in (
+        _LABELS_CONFIGS + _AGGREGATE_CONFIGS + _PRIMARY_CONFIGS
+        + _CROSSDOMAIN_CONFIGS + _INDIVIDUAL_CONFIGS
+    )
+}
+
+_DEPRECATED_CONFIGS = []
+for _old_name, _new_name in _DEPRECATED_ALIASES.items():
+    _target = _ALL_NAMED_CONFIGS[_new_name]
+    _DEPRECATED_CONFIGS.append(
+        _make_config(
+            _old_name,
+            f"DEPRECATED: Use '{_new_name}' instead.",
+            config_type=_target.config_type,
+            data_source=_target.data_source,
+            split_type=_target.split_type,
+            train_source=_target.train_source,
+            test_source=_target.test_source,
+            test_split_type=_target.test_split_type,
+            paths_only=_target.paths_only,
+            framewise=_target.framewise,
+            include_video=_target.include_video,
+            decode_video=_target.decode_video,
+            oops_video_dir=_target.oops_video_dir,
+            deprecated_alias_for=_new_name,
+        )
+    )
+
+
+# ---- Builder ----
+
+class OmniFall(GeneratorBasedBuilder):
+    """OmniFall unified fall detection benchmark builder."""
+
+    VERSION = datasets.Version("2.0.0")
+    BUILDER_CONFIG_CLASS = OmniFallConfig
+
+    BUILDER_CONFIGS = (
+        _LABELS_CONFIGS
+        + _AGGREGATE_CONFIGS
+        + _PRIMARY_CONFIGS
+        + _CROSSDOMAIN_CONFIGS
+        + _INDIVIDUAL_CONFIGS
+        + _DEPRECATED_CONFIGS
+    )
+
+    DEFAULT_CONFIG_NAME = "labels"
+
+    def _info(self):
+        """Return dataset metadata and feature schema."""
+        cfg = self.config
+
+        if cfg.config_type == "metadata":
+            features = _syn_metadata_features()
+        elif cfg.framewise:
+            features = _syn_framewise_features()
+        elif cfg.paths_only:
+            features = _paths_only_features()
+        elif cfg.is_crossdomain:
+            # Cross-domain configs mix sources; use the common 7-column schema
+            features = _core_features()
+        elif cfg.data_source == "syn":
+            features = _syn_features()
+        else:
+            features = _core_features()
+
+        if cfg.include_video:
+            features["video"] = Video() if cfg.decode_video else Value("string")
+
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=features,
+            homepage=_HOMEPAGE,
+            license=_LICENSE,
+            citation=_CITATION,
+        )
+
+    # ---- Split generators ----
+
+    def _split_generators(self, dl_manager):
+        cfg = self.config
+
+        # Emit deprecation warning
+        if cfg.deprecated_alias_for:
+            warnings.warn(
+                f"Config '{cfg.name}' is deprecated. "
+                f"Use '{cfg.deprecated_alias_for}' instead.",
+                DeprecationWarning,
+                stacklevel=2,
+            )
+
+        # Labels configs: all data in a single "train" split
+        if cfg.config_type == "labels":
+            return self._labels_splits(cfg, dl_manager)
+
+        # Metadata config
+        if cfg.config_type == "metadata":
+            metadata_path = dl_manager.download("videos/metadata.csv")
+            return [
+                SplitGenerator(
+                    name=Split.TRAIN,
+                    gen_kwargs={"mode": "metadata", "metadata_path": metadata_path},
+                ),
+            ]
+
+        # Framewise config (no split, all data)
+        if cfg.config_type == "framewise":
+            archive_path = dl_manager.download_and_extract(
+                "data_files/syn_frame_wise_labels.tar.zst"
+            )
+            metadata_path = dl_manager.download("videos/metadata.csv")
+            return [
+                SplitGenerator(
+                    name=Split.TRAIN,
+                    gen_kwargs={
+                        "mode": "framewise",
+                        "hdf5_dir": archive_path,
+                        "metadata_path": metadata_path,
+                        "split_file": None,
+                    },
+                ),
+            ]
+
+        # Split configs (train/val/test)
+        if cfg.config_type == "split":
+            return self._split_config_generators(cfg, dl_manager)
+
+        raise ValueError(f"Unknown config_type: {cfg.config_type}")
+
+    def _labels_splits(self, cfg, dl_manager):
+        """Generate split generators for labels-type configs."""
+        if cfg.data_source == "syn":
596
+ filepath = dl_manager.download(_SYN_LABEL_FILE)
597
+ return [
598
+ SplitGenerator(
599
+ name=Split.TRAIN,
600
+ gen_kwargs={"mode": "csv_direct", "filepath": filepath},
601
+ ),
602
+ ]
603
+ elif cfg.data_source == "staged+itw":
604
+ filepaths = dl_manager.download(_STAGED_LABEL_FILES + [_ITW_LABEL_FILE])
605
+ return [
606
+ SplitGenerator(
607
+ name=Split.TRAIN,
608
+ gen_kwargs={"mode": "csv_multi", "filepaths": filepaths},
609
+ ),
610
+ ]
611
+ else:
612
+ raise ValueError(f"Unsupported data_source for labels: {cfg.data_source}")
613
+
614
+ def _split_config_generators(self, cfg, dl_manager):
615
+ """Generate split generators for train/val/test split configs."""
616
+ if cfg.is_crossdomain:
617
+ return self._crossdomain_splits(cfg, dl_manager)
618
+
619
+ if cfg.data_source == "syn":
620
+ return self._syn_splits(cfg, dl_manager)
621
+ elif cfg.data_source == "staged":
622
+ return self._staged_splits(cfg, dl_manager)
623
+ elif cfg.data_source == "itw":
624
+ return self._itw_splits(cfg, dl_manager)
625
+ elif cfg.data_source == "staged+itw":
626
+ return self._aggregate_splits(cfg, dl_manager)
627
+ elif cfg.data_source in _STAGED_DATASETS:
628
+ return self._individual_splits(cfg, dl_manager)
629
+ else:
630
+ raise ValueError(f"Unknown data_source: {cfg.data_source}")
631
+
632
+ def _staged_split_files(self, split_type, split_name):
633
+ """Return list of split CSV paths for all 8 staged datasets."""
634
+ return [f"splits/{split_type}/{ds}/{split_name}.csv" for ds in _STAGED_DATASETS]
635
+
636
+ def _resolve_oops_video_dir(self, cfg, dl_manager):
637
+ """Resolve the OOPS video directory for OF-ItW configs.
638
+
639
+ Priority:
640
+ 1. If include_video is False, return None.
641
+ 2. If oops_video_dir is explicitly provided, validate and return it.
642
+ 3. If cache exists with expected video count, return cache path.
643
+ 4. Otherwise, prompt for license consent and auto-download.
644
+ """
645
+ if not cfg.include_video:
646
+ return None
647
+
648
+ # User explicitly provided a directory
649
+ if cfg.oops_video_dir:
650
+ video_dir = os.path.abspath(cfg.oops_video_dir)
651
+ if not os.path.isdir(video_dir):
652
+ raise FileNotFoundError(
653
+ f"oops_video_dir does not exist: {video_dir}\n"
654
+ "Run prepare_oops_videos.py to prepare OOPS videos first."
655
+ )
656
+ return video_dir
657
+
658
+ # Check cache
659
+ cache_dir = _OOPS_CACHE_DIR
660
+ if self._oops_cache_is_valid(cache_dir):
661
+ return cache_dir
662
+
663
+ # Auto-download: prompt for consent and extract
664
+ return self._auto_prepare_oops(cache_dir, dl_manager)
665
+
666
+ def _oops_cache_is_valid(self, cache_dir):
667
+ """Check if the OOPS video cache contains the expected number of videos."""
668
+ falls_dir = os.path.join(cache_dir, "falls")
669
+ if not os.path.isdir(falls_dir):
670
+ return False
671
+ mp4_count = sum(1 for f in os.listdir(falls_dir) if f.endswith(".mp4"))
672
+ if mp4_count >= _OOPS_EXPECTED_VIDEO_COUNT:
673
+ return True
674
+ if mp4_count > 0:
675
+ warnings.warn(
676
+ f"OOPS cache at {cache_dir} contains {mp4_count}/{_OOPS_EXPECTED_VIDEO_COUNT} "
677
+ f"videos (incomplete). Will re-download."
678
+ )
679
+ return False
680
+
681
+ def _auto_prepare_oops(self, cache_dir, dl_manager):
682
+ """Download and prepare OOPS videos with interactive license consent."""
683
+ import csv
684
686
+
687
+ # Print license and get consent
688
+ print(_OOPS_LICENSE_TEXT.format(
689
+ count=_OOPS_EXPECTED_VIDEO_COUNT, cache_dir=cache_dir,
690
+ ))
691
+ try:
692
+ response = input('Type "YES" to accept the license and begin download: ')
693
+ except EOFError:
694
+ raise RuntimeError(
695
+ "Cannot prompt for OOPS license consent in non-interactive mode.\n"
696
+ "Either run prepare_oops_videos.py manually and pass oops_video_dir,\n"
697
+ "or run this script in an interactive terminal."
698
+ )
699
+
700
+ if response.strip() != "YES":
701
+ raise RuntimeError(
702
+ "OOPS license not accepted. To load OF-ItW with videos, either:\n"
703
+ "1. Run again and type YES when prompted, or\n"
704
+ "2. Run prepare_oops_videos.py manually and pass oops_video_dir."
705
+ )
706
+
707
+ # Download the mapping file from the HF repo
708
+ mapping_path = dl_manager.download(_OOPS_MAPPING_FILE)
709
+ mapping = {}
710
+ with open(mapping_path) as f:
711
+ reader = csv.DictReader(f)
712
+ for row in reader:
713
+ mapping[row["oops_path"]] = row["itw_path"]
714
+
715
+ # Create output directory
716
+ os.makedirs(os.path.join(cache_dir, "falls"), exist_ok=True)
717
+
718
+ # Extract videos
719
+ found = self._extract_oops_videos(_OOPS_URL, mapping, cache_dir)
720
+
721
+ if found == 0:
722
+ raise RuntimeError(
723
+ "Failed to extract any OOPS videos. Check network connectivity "
724
+ "and try again, or use prepare_oops_videos.py with a local archive."
725
+ )
726
+
727
+ if found < _OOPS_EXPECTED_VIDEO_COUNT:
728
+ warnings.warn(
729
+ f"Only extracted {found}/{_OOPS_EXPECTED_VIDEO_COUNT} OOPS videos. "
730
+ f"Some videos may be missing from the archive."
731
+ )
732
+
733
+ return cache_dir
734
+
735
+ def _extract_oops_videos(self, source, mapping, output_dir):
736
+ """Stream through the OOPS archive and extract matching videos."""
737
+ import subprocess
738
+ import tarfile
739
+
740
+ total = len(mapping)
741
+ print(f"Extracting {total} videos from OOPS archive...")
742
+ print("(Streaming ~45 GB from the web; no local disk space is needed for the archive)")
743
+ print("(This may take 30-60 minutes depending on connection speed)")
744
+
745
+ os.makedirs(os.path.join(output_dir, "falls"), exist_ok=True)
746
+
747
+ found = 0
748
+ remaining = set(mapping.keys())
749
+
750
+ cmd = f'curl -sL "{source}" | tar -xzf - --to-stdout "oops_dataset/video.tar.gz"'
751
+ proc = subprocess.Popen(
752
+ cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,  # an unread stderr PIPE could deadlock the stream
753
+ )
754
+
755
+ try:
756
+ with tarfile.open(fileobj=proc.stdout, mode="r|gz") as tar:
757
+ for member in tar:
758
+ if not remaining:
759
+ break
760
+ if member.name in remaining:
761
+ itw_path = mapping[member.name]
762
+ out_path = os.path.join(output_dir, itw_path)
763
+
764
+ f = tar.extractfile(member)
765
+ if f is not None:
766
+ with open(out_path, "wb") as out_f:
767
+ while True:
768
+ chunk = f.read(1024 * 1024)
769
+ if not chunk:
770
+ break
771
+ out_f.write(chunk)
772
+ f.close()
773
+ found += 1
774
+ remaining.discard(member.name)
775
+ if found % 50 == 0:
776
+ print(f" Extracted {found}/{total} videos...")
777
+ finally:
778
+ proc.stdout.close()
779
+ proc.wait()
780
+
781
+ print(f"Extracted {found}/{total} videos to {output_dir}")
782
+ if remaining:
783
+ print(f"WARNING: {len(remaining)} videos not found in archive.")
784
+
785
+ return found
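The single-pass pattern above (open the archive in `tarfile`'s streaming `"r|gz"` mode and copy out only the wanted members) can be exercised without the real OOPS archive. A minimal self-contained sketch with a synthetic in-memory tarball; all file names and payloads here are made up for illustration:

```python
import io
import tarfile


def extract_selected(fileobj, wanted, out):
    """Single-pass streaming extraction: copy members named in `wanted` into `out`."""
    found = 0
    # "r|gz" is stream mode: members arrive in archive order, no seeking allowed.
    with tarfile.open(fileobj=fileobj, mode="r|gz") as tar:
        for member in tar:
            if member.name in wanted:
                f = tar.extractfile(member)
                if f is not None:
                    out[member.name] = f.read()
                    found += 1
    return found


# Build a tiny synthetic .tar.gz in memory (stand-in for the streamed archive).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, payload in [("a.mp4", b"AA"), ("b.mp4", b"BB"), ("c.txt", b"CC")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

out = {}
n = extract_selected(buf, {"a.mp4", "b.mp4"}, out)
print(n)  # 2
```

Because stream mode forbids random access, the selected members are written as they appear; the real extractor additionally chunks large files instead of reading each member whole.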
786
+
787
+ def _make_split_merge_generators(self, split_files_per_split, label_files,
788
+ dl_manager, video_dir=None):
789
+ """Helper to create train/val/test SplitGenerators for split_merge mode.
790
+
791
+ Args:
792
+ split_files_per_split: dict mapping split name to list of relative paths.
793
+ label_files: list of relative label file paths.
794
+ dl_manager: download manager for resolving paths.
795
+ video_dir: path to extracted video directory, or None.
796
+ """
797
+ resolved_labels = dl_manager.download(label_files)
798
+ return [
799
+ SplitGenerator(
800
+ name=split_enum,
801
+ gen_kwargs={
802
+ "mode": "split_merge",
803
+ "split_files": dl_manager.download(split_files_per_split[csv_name]),
804
+ "label_files": resolved_labels,
805
+ "video_dir": video_dir,
806
+ },
807
+ )
808
+ for split_enum, csv_name in [
809
+ (Split.TRAIN, "train"),
810
+ (Split.VALIDATION, "val"),
811
+ (Split.TEST, "test"),
812
+ ]
813
+ ]
814
+
815
+ def _staged_splits(self, cfg, dl_manager):
816
+ """OF-Staged: 8 datasets combined with CS or CV splits."""
817
+ st = cfg.split_type
818
+ return self._make_split_merge_generators(
819
+ {sn: self._staged_split_files(st, sn) for sn in ("train", "val", "test")},
820
+ _STAGED_LABEL_FILES,
821
+ dl_manager,
822
+ )
823
+
824
+ def _itw_splits(self, cfg, dl_manager):
825
+ """OF-ItW: OOPS-Fall (CS=CV identical)."""
826
+ st = cfg.split_type
827
+ video_dir = self._resolve_oops_video_dir(cfg, dl_manager)
828
+ return self._make_split_merge_generators(
829
+ {sn: [f"splits/{st}/OOPS/{sn}.csv"] for sn in ("train", "val", "test")},
830
+ [_ITW_LABEL_FILE],
831
+ dl_manager,
832
+ video_dir=video_dir,
833
+ )
834
+
835
+ def _aggregate_splits(self, cfg, dl_manager):
836
+ """All staged + OOPS combined (cs or cv)."""
837
+ st = cfg.split_type
838
+ all_labels = _STAGED_LABEL_FILES + [_ITW_LABEL_FILE]
839
+ return self._make_split_merge_generators(
840
+ {sn: self._staged_split_files(st, sn) + [f"splits/{st}/OOPS/{sn}.csv"]
841
+ for sn in ("train", "val", "test")},
842
+ all_labels,
843
+ dl_manager,
844
+ )
845
+
846
+ def _individual_splits(self, cfg, dl_manager):
847
+ """Individual dataset with CS splits."""
848
+ ds_name = cfg.data_source
849
+ label_file_map = {
850
+ "caucafall": "labels/caucafall.csv",
851
+ "cmdfall": "labels/cmdfall.csv",
852
+ "edf": "labels/edf.csv",
853
+ "gmdcsa24": "labels/GMDCSA24.csv",
854
+ "le2i": "labels/le2i.csv",
855
+ "mcfd": "labels/mcfd.csv",
856
+ "occu": "labels/occu.csv",
857
+ "up_fall": "labels/up_fall.csv",
858
+ }
859
+ label_file = label_file_map[ds_name]
860
+ st = cfg.split_type
861
+ return self._make_split_merge_generators(
862
+ {sn: [f"splits/{st}/{ds_name}/{sn}.csv"] for sn in ("train", "val", "test")},
863
+ [label_file],
864
+ dl_manager,
865
+ )
866
+
867
+ def _syn_splits(self, cfg, dl_manager):
868
+ """OF-Syn split strategies."""
869
+ st = cfg.split_type
870
+ split_dir = f"splits/syn/{st}"
871
+
872
+ # Download video archive if requested
873
+ video_dir = None
874
+ if cfg.include_video:
875
+ video_dir = dl_manager.download_and_extract(_SYN_VIDEO_ARCHIVE)
876
+
877
+ if cfg.framewise:
878
+ archive_path = dl_manager.download_and_extract(
879
+ "data_files/syn_frame_wise_labels.tar.zst"
880
+ )
881
+ metadata_path = dl_manager.download("videos/metadata.csv")
882
+ split_files = dl_manager.download(
883
+ {sn: f"{split_dir}/{sn}.csv" for sn in ("train", "val", "test")}
884
+ )
885
+ return [
886
+ SplitGenerator(
887
+ name=split_enum,
888
+ gen_kwargs={
889
+ "mode": "framewise",
890
+ "hdf5_dir": archive_path,
891
+ "metadata_path": metadata_path,
892
+ "split_file": split_files[csv_name],
893
+ },
894
+ )
895
+ for split_enum, csv_name in [
896
+ (Split.TRAIN, "train"),
897
+ (Split.VALIDATION, "val"),
898
+ (Split.TEST, "test"),
899
+ ]
900
+ ]
901
+
902
+ if cfg.paths_only:
903
+ split_files = dl_manager.download(
904
+ {sn: f"{split_dir}/{sn}.csv" for sn in ("train", "val", "test")}
905
+ )
906
+ return [
907
+ SplitGenerator(
908
+ name=split_enum,
909
+ gen_kwargs={
910
+ "mode": "paths_only",
911
+ "split_file": split_files[csv_name],
912
+ },
913
+ )
914
+ for split_enum, csv_name in [
915
+ (Split.TRAIN, "train"),
916
+ (Split.VALIDATION, "val"),
917
+ (Split.TEST, "test"),
918
+ ]
919
+ ]
920
+
921
+ return self._make_split_merge_generators(
922
+ {sn: [f"{split_dir}/{sn}.csv"] for sn in ("train", "val", "test")},
923
+ [_SYN_LABEL_FILE],
924
+ dl_manager,
925
+ video_dir=video_dir,
926
+ )
927
+
928
+ def _crossdomain_splits(self, cfg, dl_manager):
929
+ """Cross-domain configs: train/val from one source, test from another."""
930
+ train_st = cfg.split_type
931
+ test_st = cfg.test_split_type or "cs"
932
+
933
+ # Resolve video directories for each source
934
+ train_video_dir = None
935
+ if cfg.include_video and cfg.train_source == "syn":
936
+ train_video_dir = dl_manager.download_and_extract(_SYN_VIDEO_ARCHIVE)
937
+
938
+ test_video_dir = None
939
+ if cfg.include_video and cfg.test_source == "itw":
940
+ test_video_dir = self._resolve_oops_video_dir(cfg, dl_manager)
941
+
942
+ # Determine train/val files and labels
943
+ if cfg.train_source == "staged":
944
+ train_split_files = {
945
+ sn: self._staged_split_files(train_st, sn)
946
+ for sn in ("train", "val")
947
+ }
948
+ train_labels = _STAGED_LABEL_FILES
949
+ elif cfg.train_source == "syn":
950
+ train_split_files = {
951
+ sn: [f"splits/syn/{train_st}/{sn}.csv"]
952
+ for sn in ("train", "val")
953
+ }
954
+ train_labels = [_SYN_LABEL_FILE]
955
+ else:
956
+ raise ValueError(f"Unsupported train_source: {cfg.train_source}")
957
+
958
+ # Determine test files and labels
959
+ if cfg.test_source == "itw":
960
+ test_split_files = [f"splits/{test_st}/OOPS/test.csv"]
961
+ test_labels = [_ITW_LABEL_FILE]
962
+ else:
963
+ raise ValueError(f"Unsupported test_source: {cfg.test_source}")
964
+
965
+ # Download all paths
966
+ resolved_train_labels = dl_manager.download(train_labels)
967
+ resolved_test_labels = dl_manager.download(test_labels)
968
+ resolved_test_splits = dl_manager.download(test_split_files)
969
+
970
+ return [
971
+ SplitGenerator(
972
+ name=Split.TRAIN,
973
+ gen_kwargs={
974
+ "mode": "split_merge",
975
+ "split_files": dl_manager.download(train_split_files["train"]),
976
+ "label_files": resolved_train_labels,
977
+ "video_dir": train_video_dir,
978
+ },
979
+ ),
980
+ SplitGenerator(
981
+ name=Split.VALIDATION,
982
+ gen_kwargs={
983
+ "mode": "split_merge",
984
+ "split_files": dl_manager.download(train_split_files["val"]),
985
+ "label_files": resolved_train_labels,
986
+ "video_dir": train_video_dir,
987
+ },
988
+ ),
989
+ SplitGenerator(
990
+ name=Split.TEST,
991
+ gen_kwargs={
992
+ "mode": "split_merge",
993
+ "split_files": resolved_test_splits,
994
+ "label_files": resolved_test_labels,
995
+ "video_dir": test_video_dir,
996
+ },
997
+ ),
998
+ ]
999
+
1000
+ # ---- Example generators ----
1001
+
1002
+ def _generate_examples(self, mode, **kwargs):
1003
+ """Dispatch to the appropriate generator based on mode."""
1004
+ if mode == "csv_direct":
1005
+ yield from self._gen_csv_direct(**kwargs)
1006
+ elif mode == "csv_multi":
1007
+ yield from self._gen_csv_multi(**kwargs)
1008
+ elif mode == "split_merge":
1009
+ yield from self._gen_split_merge(**kwargs)
1010
+ elif mode == "metadata":
1011
+ yield from self._gen_metadata(**kwargs)
1012
+ elif mode == "framewise":
1013
+ yield from self._gen_framewise(**kwargs)
1014
+ elif mode == "paths_only":
1015
+ yield from self._gen_paths_only(**kwargs)
1016
+ else:
1017
+ raise ValueError(f"Unknown generation mode: {mode}")
1018
+
1019
+ def _gen_csv_direct(self, filepath):
1020
+ """Load a single CSV file directly."""
1021
+ df = pd.read_csv(filepath)
1022
+ for idx, row in df.iterrows():
1023
+ yield idx, self._row_to_example(row)
1024
+
1025
+ def _gen_csv_multi(self, filepaths):
1026
+ """Load and concatenate multiple CSV files."""
1027
+ dfs = [pd.read_csv(fp) for fp in filepaths]
1028
+ df = pd.concat(dfs, ignore_index=True)
1029
+ for idx, row in df.iterrows():
1030
+ yield idx, self._row_to_example(row)
1031
+
1032
+ def _gen_split_merge(self, split_files, label_files, video_dir=None):
1033
+ """Load split paths, merge with labels, yield examples."""
1034
+ split_dfs = [pd.read_csv(sf) for sf in split_files]
1035
+ split_df = pd.concat(split_dfs, ignore_index=True)
1036
+
1037
+ if self.config.paths_only:
1038
+ for idx, row in split_df.iterrows():
1039
+ yield idx, {"path": row["path"]}
1040
+ return
1041
+
1042
+ label_dfs = [pd.read_csv(lf) for lf in label_files]
1043
+ labels_df = pd.concat(label_dfs, ignore_index=True)
1044
+
1045
+ merged_df = pd.merge(split_df, labels_df, on="path", how="left")
1046
+
1047
+ for idx, row in merged_df.iterrows():
1048
+ example = self._row_to_example(row)
1049
+ if video_dir is not None:
1050
+ example["video"] = os.path.join(video_dir, row["path"] + ".mp4")
1051
+ yield idx, example
1052
+
1053
+ def _gen_metadata(self, metadata_path):
1054
+ """Load OF-Syn video-level metadata."""
1055
+ df = pd.read_csv(metadata_path)
1056
+ metadata_cols = [
1057
+ "path", "age_group", "gender_presentation", "monk_skin_tone",
1058
+ "race_ethnicity_omb", "bmi_band", "height_band",
1059
+ "environment_category", "camera_shot", "speed",
1060
+ "camera_elevation", "camera_azimuth", "camera_distance",
1061
+ ]
1062
+ available_cols = [c for c in metadata_cols if c in df.columns]
1063
+ df = df[available_cols].drop_duplicates(subset=["path"]).reset_index(drop=True)
1064
+ df["dataset"] = "of-syn"
1065
+
1066
+ for idx, row in df.iterrows():
1067
+ yield idx, self._row_to_example(row)
1068
+
1069
+ def _gen_framewise(self, hdf5_dir, metadata_path, split_file=None):
1070
+ """Load frame-wise labels from HDF5 files with metadata."""
1071
+ import h5py
1072
+ import tarfile
1073
+ from pathlib import Path
1074
+
1075
+ metadata_df = pd.read_csv(metadata_path)
1076
+
1077
+ valid_paths = None
1078
+ if split_file is not None:
1079
+ split_df = pd.read_csv(split_file)
1080
+ valid_paths = set(split_df["path"].tolist())
1081
+
1082
+ hdf5_path = Path(hdf5_dir)
1083
+ metadata_fields = [
1084
+ "age_group", "gender_presentation", "monk_skin_tone",
1085
+ "race_ethnicity_omb", "bmi_band", "height_band",
1086
+ "environment_category", "camera_shot", "speed",
1087
+ "camera_elevation", "camera_azimuth", "camera_distance",
1088
+ ]
1089
+
1090
+ if hdf5_path.is_file() and (
1091
+ hdf5_path.suffix == ".tar" or tarfile.is_tarfile(str(hdf5_path))
1092
+ ):
1093
+ idx = 0
1094
+ with tarfile.open(hdf5_path, "r") as tar:
1095
+ for member in tar.getmembers():
1096
+ if not member.name.endswith(".h5"):
1097
+ continue
1098
+ # lstrip("./") strips a character set, not a prefix; use removeprefix/removesuffix
+ video_path = member.name.removeprefix("./").removesuffix(".h5")
1099
+ if valid_paths is not None and video_path not in valid_paths:
1100
+ continue
1101
+ try:
1102
+ h5_file = tar.extractfile(member)
1103
+ if h5_file is None:
1104
+ continue
1105
+ import tempfile
1106
+ with tempfile.NamedTemporaryFile(suffix=".h5", delete=True) as tmp:
1107
+ tmp.write(h5_file.read())
1108
+ tmp.flush()
1109
+ with h5py.File(tmp.name, "r") as f:
1110
+ frame_labels = f["label_indices"][:].tolist()
1111
+ video_metadata = metadata_df[metadata_df["path"] == video_path]
1112
+ if len(video_metadata) == 0:
1113
+ continue
1114
+ video_meta = video_metadata.iloc[0]
1115
+ example = {
1116
+ "path": video_path,
1117
+ "dataset": "of-syn",
1118
+ "frame_labels": frame_labels,
1119
+ }
1120
+ for field in metadata_fields:
1121
+ if field in video_meta and pd.notna(video_meta[field]):
1122
+ example[field] = str(video_meta[field])
1123
+ yield idx, example
1124
+ idx += 1
1125
+ except Exception as e:
1126
+ warnings.warn(f"Failed to process {member.name}: {e}")
1127
+ continue
1128
+ else:
1129
+ hdf5_files = sorted(hdf5_path.glob("**/*.h5"))
1130
+ idx = 0
1131
+ for h5_file_path in hdf5_files:
1132
+ relative_path = h5_file_path.relative_to(hdf5_path)
1133
+ video_path = str(relative_path.with_suffix(""))
1134
+ if valid_paths is not None and video_path not in valid_paths:
1135
+ continue
1136
+ try:
1137
+ with h5py.File(h5_file_path, "r") as f:
1138
+ frame_labels = f["label_indices"][:].tolist()
1139
+ video_metadata = metadata_df[metadata_df["path"] == video_path]
1140
+ if len(video_metadata) == 0:
1141
+ continue
1142
+ video_meta = video_metadata.iloc[0]
1143
+ example = {
1144
+ "path": video_path,
1145
+ "dataset": "of-syn",
1146
+ "frame_labels": frame_labels,
1147
+ }
1148
+ for field in metadata_fields:
1149
+ if field in video_meta and pd.notna(video_meta[field]):
1150
+ example[field] = str(video_meta[field])
1151
+ yield idx, example
1152
+ idx += 1
1153
+ except Exception as e:
1154
+ warnings.warn(f"Failed to process {h5_file_path}: {e}")
1155
+ continue
1156
+
1157
+ def _gen_paths_only(self, split_file):
1158
+ """Load paths only from a split file."""
1159
+ df = pd.read_csv(split_file)
1160
+ for idx, row in df.iterrows():
1161
+ yield idx, {"path": row["path"]}
1162
+
1163
+ def _row_to_example(self, row):
1164
+ """Convert a DataFrame row to a typed example dict.
1165
+
1166
+ Only includes fields present in the row. HuggingFace's Features.encode_example()
1167
+ will ignore extra fields and fill missing optional fields.
1168
+ """
1169
+ example = {"path": str(row["path"])}
1170
+
1171
+ # Core temporal fields
1172
+ for field, dtype in [
1173
+ ("label", int), ("start", float), ("end", float),
1174
+ ("subject", int), ("cam", int),
1175
+ ]:
1176
+ if field in row.index and pd.notna(row[field]):
1177
+ example[field] = dtype(row[field])
1178
+
1179
+ if "dataset" in row.index and pd.notna(row["dataset"]):
1180
+ example["dataset"] = str(row["dataset"])
1181
+
1182
+ # Demographic and scene metadata (present only for syn data)
1183
+ for field in [
1184
+ "age_group", "gender_presentation", "monk_skin_tone",
1185
+ "race_ethnicity_omb", "bmi_band", "height_band",
1186
+ "environment_category", "camera_shot", "speed",
1187
+ "camera_elevation", "camera_azimuth", "camera_distance",
1188
+ ]:
1189
+ if field in row.index and pd.notna(row[field]):
1190
+ example[field] = str(row[field])
1191
+
1192
+ return example
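The `split_merge` mode the builder implements boils down to a left-merge of split paths onto label rows. A minimal sketch with synthetic frames; the column names follow the builder's CSVs, but the data is made up:

```python
import pandas as pd

# Synthetic stand-ins for a splits/<type>/<dataset>/train.csv
# and a labels/<dataset>.csv file.
split_df = pd.DataFrame({"path": ["ds/v1", "ds/v2"]})
labels_df = pd.DataFrame({
    "path": ["ds/v1", "ds/v2", "ds/v3"],
    "label": [1, 0, 1],
    "start": [0.0, 2.5, 1.0],
    "end": [3.0, 4.0, 2.0],
})

# how="left" keeps exactly the split's rows, in split order, and
# attaches label metadata by path (as in _gen_split_merge).
merged = pd.merge(split_df, labels_df, on="path", how="left")
print(merged["label"].tolist())  # [1, 0]
```

Rows listed in the split but absent from the label files survive the merge with NaN label fields, which is why `_row_to_example` guards every field with `pd.notna` before typing it.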
parquet/OOPS/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
3
+ size 46279
parquet/OOPS/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:439de7cca631ff21f27fa266407b0d7912a73c91e4d777fcc27f02690d65aa2c
3
+ size 17402
parquet/OOPS/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cf764c5161cff04ad4183ecbc344c202bee4bd7d0bf4f6e1d019e78be843d204
3
+ size 11006
parquet/caucafall/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:56f36d14fe193fb9fa001df09afe46f56d232d13864105d863ee585b37eb1d60
3
+ size 4872
parquet/caucafall/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9d7165eff7194a7d36abc1f99500202aca8985bd502bbd5f41a9332d1a2cbfb7
3
+ size 6448
parquet/caucafall/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:90390ae73ad1c0ddc0ac92914e747f5eac833729bf98d6aba5c213a763c07237
3
+ size 4647
parquet/cmdfall/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5c0f657c673bd860e3576eff27d41f2ca5921c72707c12451fb2eea708384ea7
3
+ size 52220
parquet/cmdfall/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6dbaf81afb7d332c7ca6436c4d33b26e608fe864ae1c3ad34990958e52588031
3
+ size 93589
parquet/cmdfall/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c070ec7048a682fda4c7425aaccf9a2b4f96ee8e4ca7fe267e5acf6ca6c6712
3
+ size 18969
parquet/cs-staged-wild/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
3
+ size 46279
parquet/cs-staged-wild/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d60e214dbb15b7bdaa976e03a5443168d0fe6d0f36d92260adfbb330268ff717
3
+ size 157509
parquet/cs-staged-wild/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c8d927168548653d2fc2198cd662dfc4919316f3e20bc713b52c2cd751f250d
3
+ size 24321
parquet/cs-staged/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:de99a15c53a7d10801b61ea4301a6452d2e94d9f74336d0cf716016d0a420491
3
+ size 90482
parquet/cs-staged/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d60e214dbb15b7bdaa976e03a5443168d0fe6d0f36d92260adfbb330268ff717
3
+ size 157509
parquet/cs-staged/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c8d927168548653d2fc2198cd662dfc4919316f3e20bc713b52c2cd751f250d
3
+ size 24321
parquet/cs/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0783406d42caea66d7d75f75e0a3108d2b17523aaee7bc962590a54cbce6a786
3
+ size 139074
parquet/cs/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0fa18b6a07176eafdc27765f8a3d1a16f3f07333f3f25e78b4613f9fd1a76e17
3
+ size 171245
parquet/cs/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:529bddf37e746efddf77068ee9a577d8d2983754e432c9ca5b0df78bdbda9cc2
3
+ size 33666
parquet/cv-staged-wild/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
3
+ size 46279
parquet/cv-staged-wild/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b3355d27e4e9665756b3da472ea080192e681e595a1f318dda6739bf1a4ff01
3
+ size 113467
parquet/cv-staged-wild/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d47789598ae465a3f9284760d17eb9073d8f94ec667280c5e22830cad463f88b
3
+ size 103804
parquet/cv-staged/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00d48d217da8eb98a041fd0a86e9b51703c803dd5c3cc35bad625a8d0ee14c26
3
+ size 180622
parquet/cv-staged/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b3355d27e4e9665756b3da472ea080192e681e595a1f318dda6739bf1a4ff01
3
+ size 113467
parquet/cv-staged/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d47789598ae465a3f9284760d17eb9073d8f94ec667280c5e22830cad463f88b
3
+ size 103804
parquet/cv/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c1ddcfb78f6334bd2c59f58b9dcf6d22b4ea94223d50240f82d258a8129ace1a
3
+ size 225925
parquet/cv/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b6829948633305c80ee633ae8240842670354685633e2a8e8f4a2fed05e32797
3
+ size 127828
parquet/cv/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4880d7837978e7fee9981de3bf657e32bb4857014f03a579cd0e73423902819
3
+ size 111988
parquet/edf/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c322cdf427680f6840e61d0dce9e011867f6be5dded25fc5898abecad86534b4
3
+ size 5693
parquet/edf/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e2839d6079515f3a3ac5acf5072df97a94ea1c04590b2b8027db2a24b9d95545
3
+ size 7704
parquet/edf/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8c2698f76a52a889c4ca02cb9ac59a0d42af6269db183a0f0026947ee4500f4
3
+ size 5232
parquet/gmdcsa24/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d13a0250ad70fdfad33fedb2f6d77add15f0bf286fa3e502938de9b3883031fc
3
+ size 5409
parquet/gmdcsa24/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6339d18286b632e7cc9764497c732088e20348674c36d129d33cb29fb3412622
3
+ size 6747
parquet/gmdcsa24/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c5a6d927e436f2446c279c2e098de1ad3dbfe0646f3d1e5808f988ea58ea968
3
+ size 5982
parquet/labels-syn/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:331c7543e7131935b7d5211dc820543d20bad2fb205bf00702fdb79021c12cac
3
+ size 225449
parquet/labels/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a5169d3e95b26080527265516d415d068a83c3dea4cddca8d0828a8d2345fd3a
3
+ size 309792
parquet/le2i/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c0644ac5bdfa306db7211ba110f99c6bbe3c07074a5c89159c6ef24338a4bba
3
+ size 6708
parquet/le2i/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:736ab4c0f5715f5ad6e319582b04a46bcdc32c4f8377c8cd9e8442aadd9e97e0
3
+ size 11895
parquet/le2i/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d2a340314d8d07d0ae6fcdcbeee69d5344b11bc0942c22e0dd2ab969f55dc39
3
+ size 5360
parquet/mcfd/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:87d258476d2d3280dcac960f7c9ee6b3e29210be2842600457ec448305ffdfed
3
+ size 18819
parquet/metadata-syn/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3e4cd9cf3ac23a087f17621d129f8910fccbfdf81fe965cb415bc6075639175
3
+ size 112932
parquet/occu/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c177a20d5c6c78bda80ffc34bf181311f52dc47905786088f02a7627ea708094
3
+ size 5450
parquet/occu/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f62589b60ee5096c0daabeef98614722ad7d06e78e8fd18cc3dd0091e198c977
3
+ size 7538
parquet/occu/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fe8e026e7699b323385583f855f846811bc805dc86775f9a6051444f3f639eeb
3
+ size 5365
parquet/of-itw/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
3
+ size 46279
parquet/of-itw/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:439de7cca631ff21f27fa266407b0d7912a73c91e4d777fcc27f02690d65aa2c
3
+ size 17402