Commit b1da356
Parent(s): d9f242f

Added OF-ITW download logic

Files changed:
- README.md (+34, -5)
- data_files/oops_video_mapping.csv (added)
- omnifall.py (+41, -9)
- prepare_oops_videos.py (+177, added)
README.md CHANGED

@@ -75,6 +75,8 @@ The repository is organized as follows:
 - `videos/metadata.csv` - OF-Syn video-level metadata (12,000 videos)
 - `data_files/omnifall-synthetic_av1.tar` - OF-Syn video archive (12,000 AV1-encoded MP4s)
 - `data_files/syn_frame_wise_labels.tar.zst` - OF-Syn frame-wise HDF5 labels
+- `data_files/oops_video_mapping.csv` - Mapping from OOPS original filenames to OF-ItW sanitized names
+- `prepare_oops_videos.py` - Script to extract OOPS videos for OF-ItW (streams from source, no 45GB download needed)
 
 ### Label Format
 
@@ -142,6 +144,8 @@ All configurations are defined in the `omnifall.py` dataset builder and loaded v
 ### OF-ItW Config
 - `of-itw`: OOPS-Fall in-the-wild genuine accidents
 
+OF-ItW supports optional video loading via `include_video=True` with `oops_video_dir` (see examples below). Videos are not hosted here due to licensing; run `prepare_oops_videos.py` to download them from the [original OOPS source](https://oops.cs.columbia.edu/data/).
+
 ### OF-Syn Configs
 - `of-syn`: Fixed randomized 80/10/10 split
 - `of-syn-cross-age`: Cross-age split (train: adults, test: children/elderly)
@@ -208,7 +212,7 @@ syn_labels = load_dataset("simplexsigil2/omnifall", "labels-syn")["train"]
 
 ### Loading OF-Syn videos
 
-OF-Syn configs support `include_video=True` to download and include the video files (~9 GB).
+OF-Syn configs support `include_video=True` to download and include the video files (~9 GB download and disk space).
 By default, videos are returned as decoded `Video()` objects. Set `decode_video=False` to get file paths instead.
 
 ```python
@@ -226,13 +230,38 @@ ds = load_dataset("simplexsigil2/omnifall", "of-syn",
 sample = ds["train"][0]
 print(sample["video"])  # "/path/to/cached/fall/fall_ch_001.mp4"
 
-# Cross-domain with video: train/val
+# Cross-domain with video: train/val (syn) and test (itw) both have videos
 ds = load_dataset("simplexsigil2/omnifall", "of-syn-itw",
-                  include_video=True, decode_video=False,
-
-
+                  include_video=True, decode_video=False,
+                  oops_video_dir="/path/to/oops_prepared",
+                  trust_remote_code=True)
+print(ds["train"][0]["video"])  # syn video path (auto-downloaded)
+print(ds["test"][0]["video"])   # itw video path (from oops_video_dir)
 ```
 
+### Loading OF-ItW (OOPS) videos
+
+OOPS videos are not hosted in this repository due to licensing. To load OF-ItW with videos, first prepare the OOPS videos using the included script:
+
+```bash
+# Step 1: Prepare OOPS videos (~45GB streamed from source, ~2.6GB disk space)
+python prepare_oops_videos.py --output_dir /path/to/oops_prepared
+```
+
+```python
+# Step 2: Load OF-ItW with videos
+from datasets import load_dataset
+
+ds = load_dataset("simplexsigil2/omnifall", "of-itw",
+                  include_video=True, decode_video=False,
+                  oops_video_dir="/path/to/oops_prepared",
+                  trust_remote_code=True)
+sample = ds["train"][0]
+print(sample["video"])  # "/path/to/oops_prepared/falls/BestFailsofWeek2July2016_FailArmy9.mp4"
+```
+
+The preparation script streams the full [OOPS dataset](https://oops.cs.columbia.edu/data/) archive (~45GB download) from the original source and extracts only the 818 videos used in OF-ItW. The archive is streamed and never written to disk, so only ~2.6GB of disk space is needed for the extracted videos. If you already have the OOPS archive downloaded locally, pass it with `--oops_archive /path/to/video_and_anns.tar.gz`.
+
 ## Label definitions
 
 In this section we provide additional information about the labelling process to provide as much transparency as possible.
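The "streamed and never written to disk" behavior described above relies on Python's `tarfile` stream mode, which reads an archive sequentially from a non-seekable pipe and lets you keep only the members you want. A minimal, self-contained illustration of that technique; the member names and rename map below are invented for the demo and are not part of the dataset:

```python
import io
import tarfile

def extract_selected(stream, wanted, rename):
    """Stream a gzipped tar, keeping only members in `wanted`.

    Mode "r|gz" reads sequentially, so the source never has to be a
    seekable file on disk -- it can be a subprocess pipe. Returns a
    dict mapping renamed paths to the extracted bytes.
    """
    out = {}
    wanted = set(wanted)
    with tarfile.open(fileobj=stream, mode="r|gz") as tar:
        for member in tar:
            if not wanted:
                break  # stop early once everything wanted is found
            if member.name in wanted:
                f = tar.extractfile(member)
                if f is not None:
                    out[rename[member.name]] = f.read()
                wanted.discard(member.name)
    return out

# Build a tiny in-memory archive to stand in for the streamed download
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, data in [("a.mp4", b"AAA"), ("b.mp4", b"BBB"), ("c.mp4", b"CCC")]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
buf.seek(0)

result = extract_selected(buf, {"a.mp4", "c.mp4"},
                          {"a.mp4": "falls/a.mp4", "c.mp4": "falls/c.mp4"})
print(sorted(result))  # ['falls/a.mp4', 'falls/c.mp4']
```

The actual script applies the same idea to `oops_dataset/video.tar.gz` piped out of the outer archive, writing each matched member to `output_dir` instead of collecting bytes in memory.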
data_files/oops_video_mapping.csv ADDED

The diff for this file is too large to render.
omnifall.py CHANGED

@@ -9,6 +9,7 @@ All components share a 16-class activity taxonomy. Staged datasets use classes 0
 while OF-ItW and OF-Syn use the full 0-15 range.
 """
 
+import os
 import warnings
 import pandas as pd
 import datasets
@@ -211,9 +212,14 @@ class OmniFallConfig(BuilderConfig):
         test_split_type: For cross-domain configs, overrides split_type for test.
         paths_only: If True, only return video paths (no label merging).
         framewise: If True, load frame-wise labels from HDF5 (OF-Syn only).
-        include_video: If True, download and include video files
+        include_video: If True, download and include video files.
+            For OF-Syn configs, videos are downloaded from the HF repo.
+            For OF-ItW configs, requires oops_video_dir to be set.
         decode_video: If True (default), use Video() feature for auto-decoding.
             If False, return absolute file path as string.
+        oops_video_dir: Path to directory containing prepared OOPS videos
+            (produced by prepare_oops_videos.py). Required when loading
+            OF-ItW configs with include_video=True.
         deprecated_alias_for: If set, this config is a deprecated alias.
     """
 
@@ -229,6 +235,7 @@ class OmniFallConfig(BuilderConfig):
         framewise=False,
         include_video=False,
         decode_video=True,
+        oops_video_dir=None,
         deprecated_alias_for=None,
         **kwargs,
     ):
@@ -243,6 +250,7 @@ class OmniFallConfig(BuilderConfig):
         self.framewise = framewise
         self.include_video = include_video
         self.decode_video = decode_video
+        self.oops_video_dir = oops_video_dir
         self.deprecated_alias_for = deprecated_alias_for
 
     @property
@@ -439,6 +447,7 @@ for _old_name, _new_name in _DEPRECATED_ALIASES.items():
             framewise=_target.framewise,
             include_video=_target.include_video,
             decode_video=_target.decode_video,
+            oops_video_dir=_target.oops_video_dir,
             deprecated_alias_for=_new_name,
         )
     )
@@ -587,6 +596,25 @@ class OmniFall(GeneratorBasedBuilder):
         """Return list of split CSV paths for all 8 staged datasets."""
         return [f"splits/{split_type}/{ds}/{split_name}.csv" for ds in _STAGED_DATASETS]
 
+    def _resolve_oops_video_dir(self, cfg):
+        """Resolve the OOPS video directory for OF-ItW configs."""
+        if not cfg.include_video:
+            return None
+        if not cfg.oops_video_dir:
+            raise ValueError(
+                "OF-ItW video loading requires oops_video_dir. "
+                "Run prepare_oops_videos.py first, then pass the output path:\n"
+                "  load_dataset(..., include_video=True, "
+                'oops_video_dir="/path/to/oops_prepared")'
+            )
+        video_dir = os.path.abspath(cfg.oops_video_dir)
+        if not os.path.isdir(video_dir):
+            raise FileNotFoundError(
+                f"oops_video_dir does not exist: {video_dir}\n"
+                "Run prepare_oops_videos.py to prepare OOPS videos first."
+            )
+        return video_dir
+
     def _make_split_merge_generators(self, split_files_per_split, label_files,
                                      dl_manager, video_dir=None):
         """Helper to create train/val/test SplitGenerators for split_merge mode.
@@ -627,10 +655,12 @@ class OmniFall(GeneratorBasedBuilder):
     def _itw_splits(self, cfg, dl_manager):
         """OF-ItW: OOPS-Fall (CS=CV identical)."""
         st = cfg.split_type
+        video_dir = self._resolve_oops_video_dir(cfg)
         return self._make_split_merge_generators(
             {sn: [f"splits/{st}/OOPS/{sn}.csv"] for sn in ("train", "val", "test")},
             [_ITW_LABEL_FILE],
             dl_manager,
+            video_dir=video_dir,
         )
 
     def _aggregate_splits(self, cfg, dl_manager):
@@ -731,10 +761,14 @@ class OmniFall(GeneratorBasedBuilder):
         train_st = cfg.split_type
         test_st = cfg.test_split_type or "cs"
 
-        #
-
+        # Resolve video directories for each source
+        train_video_dir = None
         if cfg.include_video and cfg.train_source == "syn":
-
+            train_video_dir = dl_manager.download_and_extract(_SYN_VIDEO_ARCHIVE)
+
+        test_video_dir = None
+        if cfg.include_video and cfg.test_source == "itw":
+            test_video_dir = self._resolve_oops_video_dir(cfg)
 
         # Determine train/val files and labels
         if cfg.train_source == "staged":
@@ -771,7 +805,7 @@ class OmniFall(GeneratorBasedBuilder):
                     "mode": "split_merge",
                     "split_files": dl_manager.download(train_split_files["train"]),
                     "label_files": resolved_train_labels,
-                    "video_dir":
+                    "video_dir": train_video_dir,
                 },
             ),
             SplitGenerator(
@@ -780,7 +814,7 @@ class OmniFall(GeneratorBasedBuilder):
                     "mode": "split_merge",
                     "split_files": dl_manager.download(train_split_files["val"]),
                     "label_files": resolved_train_labels,
-                    "video_dir":
+                    "video_dir": train_video_dir,
                 },
             ),
             SplitGenerator(
@@ -789,7 +823,7 @@ class OmniFall(GeneratorBasedBuilder):
                     "mode": "split_merge",
                     "split_files": resolved_test_splits,
                     "label_files": resolved_test_labels,
-                    "video_dir":
+                    "video_dir": test_video_dir,
                 },
             ),
         ]
@@ -828,8 +862,6 @@ class OmniFall(GeneratorBasedBuilder):
 
     def _gen_split_merge(self, split_files, label_files, video_dir=None):
         """Load split paths, merge with labels, yield examples."""
-        import os
-
         split_dfs = [pd.read_csv(sf) for sf in split_files]
         split_df = pd.concat(split_dfs, ignore_index=True)
 
prepare_oops_videos.py ADDED (+177 lines)

```python
#!/usr/bin/env python3
"""Prepare OOPS videos for OF-ItW (OmniFall In-the-Wild).

Streams the OOPS dataset archive and extracts only the 818 videos used in
OF-ItW, renamed to match the OF-ItW path convention. By default, the archive
is streamed directly from the OOPS website (~45GB) without writing it to disk.
Only the output videos (~2.6GB) are saved.

Usage:
    # Stream from the web (no local archive needed):
    python prepare_oops_videos.py --output_dir /path/to/oops_prepared

    # Use an already-downloaded archive:
    python prepare_oops_videos.py --output_dir /path/to/oops_prepared \
        --oops_archive /path/to/video_and_anns.tar.gz

    # Then load with the dataset builder:
    ds = load_dataset("simplexsigil2/omnifall", "of-itw",
                      include_video=True,
                      oops_video_dir="/path/to/oops_prepared",
                      trust_remote_code=True)
"""

import argparse
import csv
import os
import subprocess
import tarfile

OOPS_URL = "https://oops.cs.columbia.edu/data/video_and_anns.tar.gz"
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
MAPPING_FILE = os.path.join(SCRIPT_DIR, "data_files", "oops_video_mapping.csv")


def load_mapping():
    """Load the OOPS-to-ITW filename mapping from the repo."""
    if not os.path.exists(MAPPING_FILE):
        raise FileNotFoundError(
            f"Mapping file not found: {MAPPING_FILE}\n"
            "Make sure you run this script from the OmniFall dataset directory."
        )
    mapping = {}
    with open(MAPPING_FILE) as f:
        reader = csv.DictReader(f)
        for row in reader:
            mapping[row["oops_path"]] = row["itw_path"]
    return mapping


def _to_stdout_cmd(source, member):
    """Build command to extract a single tar member to stdout."""
    if source.startswith("http://") or source.startswith("https://"):
        return f'curl -sL "{source}" | tar -xzf - --to-stdout "{member}"', True
    elif source.endswith(".tar.gz") or source.endswith(".tgz"):
        return ["tar", "-xzf", source, "--to-stdout", member], False
    else:
        return ["tar", "-xf", source, "--to-stdout", member], False


def extract_videos(source, mapping, output_dir):
    """Stream through the OOPS archive and extract matching videos.

    The archive has a nested structure: the outer tar contains
    oops_dataset/video.tar.gz, which contains the actual video files.
    We pipe the inner tar.gz to stdout and selectively extract only
    the 818 videos in our mapping.
    """
    total = len(mapping)
    print(f"Extracting {total} videos from OOPS archive...")
    if source.startswith("http"):
        print("(Streaming ~45GB from web, no local disk space needed)")
        print("(This may take 30-60 minutes depending on connection speed)")
    else:
        print("(Reading from local archive)")

    os.makedirs(os.path.join(output_dir, "falls"), exist_ok=True)

    found = 0
    remaining = set(mapping.keys())

    cmd, use_shell = _to_stdout_cmd(source, "oops_dataset/video.tar.gz")
    proc = subprocess.Popen(
        cmd, shell=use_shell, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    )

    try:
        with tarfile.open(fileobj=proc.stdout, mode="r|gz") as tar:
            for member in tar:
                if not remaining:
                    break
                if member.name in remaining:
                    itw_path = mapping[member.name]
                    out_path = os.path.join(output_dir, itw_path)

                    f = tar.extractfile(member)
                    if f is not None:
                        with open(out_path, "wb") as out_f:
                            while True:
                                chunk = f.read(1024 * 1024)
                                if not chunk:
                                    break
                                out_f.write(chunk)
                        f.close()
                        found += 1
                    remaining.discard(member.name)
                    if found % 50 == 0:
                        print(f"  Extracted {found}/{total} videos...")
    finally:
        proc.stdout.close()
        proc.wait()

    print(f"Extracted {found}/{total} videos.")
    if remaining:
        print(f"WARNING: {len(remaining)} videos not found in archive:")
        for p in sorted(remaining)[:10]:
            print(f"  {p}")
        if len(remaining) > 10:
            print(f"  ... and {len(remaining) - 10} more")
    return found


def main():
    parser = argparse.ArgumentParser(
        description="Prepare OOPS videos for OF-ItW.",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    parser.add_argument(
        "--output_dir", required=True,
        help="Directory to place the prepared videos (will contain falls/*.mp4).",
    )
    parser.add_argument(
        "--oops_archive", default=None,
        help="Path to already-downloaded video_and_anns.tar.gz (or .tar). "
             "If not provided, streams directly from the OOPS website.",
    )
    args = parser.parse_args()

    output_dir = os.path.abspath(args.output_dir)
    os.makedirs(output_dir, exist_ok=True)

    # Load the pre-computed mapping from the repo
    print("Loading OOPS-to-ITW video mapping...")
    mapping = load_mapping()
    print(f"  {len(mapping)} videos to extract.")

    # Determine source
    if args.oops_archive:
        source = os.path.abspath(args.oops_archive)
        if not os.path.exists(source):
            raise FileNotFoundError(f"Archive not found: {source}")
        print(f"Source: {source}")
    else:
        source = OOPS_URL
        print(f"Source: {source}")

    # Single streaming pass
    found = extract_videos(source, mapping, output_dir)

    # Summary
    print()
    print("=" * 60)
    print("Preparation complete!")
    print(f"  Output directory: {output_dir}")
    print(f"  Videos extracted: {found}/{len(mapping)}")
    print()
    print("To load OF-ItW with videos:")
    print()
    print("  from datasets import load_dataset")
    print('  ds = load_dataset("simplexsigil2/omnifall", "of-itw",')
    print('                    include_video=True,')
    print(f'                    oops_video_dir="{output_dir}",')
    print('                    trust_remote_code=True)')


if __name__ == "__main__":
    main()
```