diff --git a/.gitattributes b/.gitattributes index 1ef325f1b111266a6b26e0196871bd78baa8c2f3..952b04c545bbc16173a2ba1a3ca2c2bd52b9f73b 100644 --- a/.gitattributes +++ b/.gitattributes @@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text # Video files - compressed *.mp4 filter=lfs diff=lfs merge=lfs -text *.webm filter=lfs diff=lfs merge=lfs -text +*.ipynb filter=lfs diff=lfs merge=lfs -text +omnifall_dataset_examples.ipynb filter=lfs diff=lfs merge=lfs -text diff --git a/.gitignore b/.gitignore index 37ffaffa736398173f5828a382d7b678aec9a0c5..9ff8146538ebb90b8b3f45bcaf1fec65ad23a8fc 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,3 @@ convert_oops_via_to_csv.py -# Symlink for local testing (HF derives dataset name from directory name) .claude -hf.py __pycache__ diff --git a/README.md b/README.md index 126edc5b9b41e1b9ebc6a21752534218937fc10d..488558cf8bbed415777dddf1ea0fefce66f4ea18 100644 --- a/README.md +++ b/README.md @@ -12,6 +12,1985 @@ tags: pretty_name: 'OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection' size_categories: - 10K @@ -64,7 +2043,8 @@ Also have a look for additional information on our project page: The repository is organized as follows: -- `omnifall.py` - Custom HuggingFace dataset builder (handles all configs) +- `omnifall_builder.py` - Dataset builder (reference, not used by HF directly) +- `parquet/` - Pre-built parquet files for all configs (used by `load_dataset`) - `labels/` - CSV files containing temporal segment annotations - Staged/OOPS labels: 7 columns (`path, label, start, end, subject, cam, dataset`) - OF-Syn labels: 19 columns (7 core + 12 demographic/scene metadata) @@ -129,13 +2109,13 @@ path/to/clip ## Evaluation Protocols -All configurations are defined in the `omnifall.py` dataset builder and loaded via `load_dataset("simplexsigil2/omnifall", "")`. +All configurations are loaded via `load_dataset("simplexsigil2/omnifall", "")`. 
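Deprecated config names still resolve (with a deprecation warning); the mapping lives in `generate_parquet.py` as `DEPRECATED_ALIASES`. A minimal plain-Python sketch of that resolution — the `resolve_config` helper is illustrative, not part of the repo:

```python
import warnings

# Alias table mirroring DEPRECATED_ALIASES in generate_parquet.py.
DEPRECATED_ALIASES = {
    "cs-staged": "of-sta-cs",
    "cv-staged": "of-sta-cv",
    "cs-staged-wild": "of-sta-itw-cs",
    "cv-staged-wild": "of-sta-itw-cv",
    "OOPS": "of-itw",
}

def resolve_config(name: str) -> str:
    """Return the current config name, warning if a deprecated alias was used."""
    if name in DEPRECATED_ALIASES:
        new_name = DEPRECATED_ALIASES[name]
        warnings.warn(
            f"Config '{name}' is deprecated; use '{new_name}' instead.",
            DeprecationWarning,
        )
        return new_name
    return name

print(resolve_config("OOPS"))  # of-itw
```

You could, for example, normalize user input with `load_dataset("simplexsigil2/omnifall", resolve_config(name))`.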
### Labels (no train/val/test splits) - `labels` (default): All staged + OOPS labels (52k segments, 7 columns) - `labels-syn`: OF-Syn labels with demographic metadata (19k segments, 19 columns) - `metadata-syn`: OF-Syn video-level metadata (12k videos) -- `framewise-syn`: OF-Syn frame-wise HDF5 labels (81 labels per video) +- `framewise-syn`: OF-Syn frame-wise HDF5 labels (81 labels per video). **Requires the `omnifall` package (coming soon).** ### OF-Staged Configs - `of-sta-cs`: 8 staged datasets, cross-subject splits @@ -144,7 +2124,7 @@ All configurations are defined in the `omnifall.py` dataset builder and loaded v ### OF-ItW Config - `of-itw`: OOPS-Fall in-the-wild genuine accidents -OF-ItW supports optional video loading via `include_video=True` with `oops_video_dir` (see examples below). Videos are not hosted here due to licensing; run `prepare_oops_videos.py` to download them from the [original OOPS source](https://oops.cs.columbia.edu/data/). +Video loading requires the `omnifall` package (coming soon). See examples below. ### OF-Syn Configs - `of-syn`: Fixed randomized 80/10/10 split @@ -152,7 +2132,7 @@ OF-ItW supports optional video loading via `include_video=True` with `oops_video - `of-syn-cross-ethnicity`: Cross-ethnicity split - `of-syn-cross-bmi`: Cross-BMI split (train: normal/underweight, test: obese) -All OF-Syn configs support optional video loading via `include_video=True` (see examples below). +Video loading for OF-Syn configs requires the `omnifall` package (coming soon). ### Cross-Domain Evaluation - `of-sta-itw-cs`: Train/val on staged CS, test on OOPS @@ -180,7 +2160,7 @@ The following old config names still work but emit a deprecation warning: ## Examples -For a complete interactive walkthrough of all configs, video loading, and label visualization, see the [example notebook](test_omnifall_dataset.ipynb). 
+For a complete interactive walkthrough of all configs, video loading, and label visualization, see the [example notebook](omnifall_dataset_examples.ipynb). ```python from datasets import load_dataset @@ -210,57 +2190,17 @@ labels = load_dataset("simplexsigil2/omnifall", "labels")["train"] syn_labels = load_dataset("simplexsigil2/omnifall", "labels-syn")["train"] ``` -### Loading OF-Syn videos +### Loading Videos -OF-Syn configs support `include_video=True` to download and include the video files (~9 GB download and disk space). -By default, videos are returned as decoded `Video()` objects. Set `decode_video=False` to get file paths instead. +Video loading (OF-Syn, OF-ItW, and cross-domain configs) requires the `omnifall` Python package, which will be available on PyPI soon. The package handles video download, caching, and integration with HuggingFace datasets. -```python -from datasets import load_dataset - -# Load with decoded video (HF Video() feature) -ds = load_dataset("simplexsigil2/omnifall", "of-syn", - include_video=True, trust_remote_code=True) -sample = ds["train"][0] -print(sample["video"]) # VideoReader object - -# Load with file paths only (faster, for custom decoding) -ds = load_dataset("simplexsigil2/omnifall", "of-syn", - include_video=True, decode_video=False, trust_remote_code=True) -sample = ds["train"][0] -print(sample["video"]) # "/path/to/cached/fall/fall_ch_001.mp4" - -# Cross-domain with video: train/val (syn) and test (itw) both have videos -ds = load_dataset("simplexsigil2/omnifall", "of-syn-itw", - include_video=True, decode_video=False, - oops_video_dir="/path/to/oops_prepared", - trust_remote_code=True) -print(ds["train"][0]["video"]) # syn video path (auto-downloaded) -print(ds["test"][0]["video"]) # itw video path (from oops_video_dir) -``` - -### Loading OF-ItW (OOPS) videos - -OOPS videos are not hosted in this repository due to licensing. 
To load OF-ItW with videos, first prepare the OOPS videos using the included script: +For OOPS videos specifically, you can prepare them manually using the included script: ```bash -# Step 1: Prepare OOPS videos (~45GB streamed from source, ~2.6GB disk space) python prepare_oops_videos.py --output_dir /path/to/oops_prepared ``` -```python -# Step 2: Load OF-ItW with videos -from datasets import load_dataset - -ds = load_dataset("simplexsigil2/omnifall", "of-itw", - include_video=True, decode_video=False, - oops_video_dir="/path/to/oops_prepared", - trust_remote_code=True) -sample = ds["train"][0] -print(sample["video"]) # "/path/to/oops_prepared/falls/BestFailsofWeek2July2016_FailArmy9.mp4" -``` - -The preparation script streams the full [OOPS dataset](https://oops.cs.columbia.edu/data/) archive (~45GB download) from the original source and extracts only the 818 videos used in OF-ItW. The archive is streamed and never written to disk, so only ~2.6GB of disk space is needed for the extracted videos. If you already have the OOPS archive downloaded locally, pass it with `--oops_archive /path/to/video_and_anns.tar.gz`. +The preparation streams the full OOPS archive from the original source and extracts only the 818 videos used in OF-ItW. The archive is streamed and never written to disk, so only ~2.6GB of disk space is needed. If you already have the OOPS archive downloaded locally, pass it with `--oops_archive /path/to/video_and_anns.tar.gz`. ## Label definitions diff --git a/generate_parquet.py b/generate_parquet.py new file mode 100644 index 0000000000000000000000000000000000000000..1c124059558cb2eeffd369a4ab06af4cec7e0d48 --- /dev/null +++ b/generate_parquet.py @@ -0,0 +1,428 @@ +"""One-time script to generate parquet files for all OmniFall HF configs. + +Created for the HF datasets 4.6 migration (dataset scripts no longer supported). +Generates parquet files that enable native load_dataset() without custom builder code. 
+Can be safely deleted after parquet files are committed to the Hub. + +Usage: + python generate_parquet.py +""" + +import os +from pathlib import Path + +import numpy as np +import pandas as pd + +REPO_ROOT = Path(__file__).parent +PARQUET_DIR = REPO_ROOT / "parquet" + +# ---- Label and split file paths ---- + +STAGED_DATASETS = [ + "caucafall", "cmdfall", "edf", "gmdcsa24", + "le2i", "mcfd", "occu", "up_fall", +] + +# Label CSV filenames (note: GMDCSA24 has capitalized filename) +STAGED_LABEL_FILES = { + "caucafall": "labels/caucafall.csv", + "cmdfall": "labels/cmdfall.csv", + "edf": "labels/edf.csv", + "gmdcsa24": "labels/GMDCSA24.csv", + "le2i": "labels/le2i.csv", + "mcfd": "labels/mcfd.csv", + "occu": "labels/occu.csv", + "up_fall": "labels/up_fall.csv", +} +ITW_LABEL_FILE = "labels/OOPS.csv" +SYN_LABEL_FILE = "labels/of-syn.csv" +METADATA_FILE = "videos/metadata.csv" + +CORE_COLUMNS = ["path", "label", "start", "end", "subject", "cam", "dataset"] +DEMOGRAPHIC_COLUMNS = [ + "age_group", "gender_presentation", "monk_skin_tone", + "race_ethnicity_omb", "bmi_band", "height_band", + "environment_category", "camera_shot", "speed", + "camera_elevation", "camera_azimuth", "camera_distance", +] +SYN_COLUMNS = CORE_COLUMNS + DEMOGRAPHIC_COLUMNS +METADATA_COLUMNS = ["path", "dataset"] + DEMOGRAPHIC_COLUMNS + +# ---- Deprecated aliases ---- + +DEPRECATED_ALIASES = { + "cs-staged": "of-sta-cs", + "cv-staged": "of-sta-cv", + "cs-staged-wild": "of-sta-itw-cs", + "cv-staged-wild": "of-sta-itw-cv", + "OOPS": "of-itw", +} + + +# ---- Helpers ---- + +def load_csv(relpath): + """Load a CSV file relative to REPO_ROOT.""" + return pd.read_csv(REPO_ROOT / relpath) + + +def load_staged_labels(datasets=None): + """Load and concatenate staged label CSVs.""" + if datasets is None: + datasets = STAGED_DATASETS + dfs = [load_csv(STAGED_LABEL_FILES[ds]) for ds in datasets] + return pd.concat(dfs, ignore_index=True) + + +def load_itw_labels(): + """Load OOPS/ItW labels.""" + return 
load_csv(ITW_LABEL_FILE) + + +def load_syn_labels(): + """Load OF-Syn labels (19-col).""" + return load_csv(SYN_LABEL_FILE) + + +def staged_split_files(split_type, split_name): + """Return list of split CSV relative paths for all 8 staged datasets.""" + return [f"splits/{split_type}/{ds}/{split_name}.csv" for ds in STAGED_DATASETS] + + +def merge_split_labels(split_files, labels_df): + """Merge split paths with labels, replicating _gen_split_merge logic.""" + split_dfs = [load_csv(sf) for sf in split_files] + split_df = pd.concat(split_dfs, ignore_index=True) + merged = pd.merge(split_df, labels_df, on="path", how="left") + # Drop rows where the path didn't match any label (orphaned split entries) + unmatched = merged["label"].isna() + if unmatched.any(): + n = unmatched.sum() + paths = merged.loc[unmatched, "path"].tolist() + print(f" WARNING: Dropping {n} unmatched path(s): {paths}") + merged = merged[~unmatched].reset_index(drop=True) + return merged + + +def cast_core_dtypes(df): + """Cast core columns to correct dtypes for parquet/ClassLabel.""" + df = df.copy() + df["path"] = df["path"].astype(str) + df["label"] = df["label"].astype(int) + df["start"] = df["start"].astype(np.float32) + df["end"] = df["end"].astype(np.float32) + df["subject"] = df["subject"].astype(np.int32) + df["cam"] = df["cam"].astype(np.int32) + df["dataset"] = df["dataset"].astype(str) + return df + + +def cast_demographic_dtypes(df): + """Cast demographic columns to string (for ClassLabel encoding).""" + df = df.copy() + for col in DEMOGRAPHIC_COLUMNS: + if col in df.columns: + df[col] = df[col].astype(str) + return df + + +def select_and_cast(df, columns, schema="core"): + """Select columns and cast dtypes.""" + df = df[columns].copy() + if schema in ("core", "syn"): + df = cast_core_dtypes(df) + if schema in ("syn", "metadata"): + df = cast_demographic_dtypes(df) + return df + + +def write_parquet(df, config_name, split_name): + """Write a dataframe as a parquet file in the expected 
layout. + + Returns the output path, or None if the dataframe is empty (Arrow can't + handle 0-row parquet files). + """ + if len(df) == 0: + print(f" SKIP {config_name}/{split_name}: 0 rows (not written)") + return None + out_dir = PARQUET_DIR / config_name + out_dir.mkdir(parents=True, exist_ok=True) + out_path = out_dir / f"{split_name}-00000-of-00001.parquet" + df.to_parquet(out_path, index=False) + return out_path + + +def generate_split_config(config_name, split_type, split_files_fn, labels_df, columns, + schema="core"): + """Generate train/val/test parquet files for a split-based config.""" + results = {} + for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]: + sf = split_files_fn(split_type, csv_name) + merged = merge_split_labels(sf, labels_df) + df = select_and_cast(merged, columns, schema) + path = write_parquet(df, config_name, split_name) + results[split_name] = len(df) + return results + + +def copy_parquet(source_config, target_config): + """Copy parquet files from source config to target config (for deprecated aliases).""" + src_dir = PARQUET_DIR / source_config + dst_dir = PARQUET_DIR / target_config + dst_dir.mkdir(parents=True, exist_ok=True) + results = {} + for src_file in sorted(src_dir.glob("*.parquet")): + dst_file = dst_dir / src_file.name + # Read and re-write to avoid symlink issues with git + df = pd.read_parquet(src_file) + df.to_parquet(dst_file, index=False) + split_name = src_file.stem.split("-")[0] + results[split_name] = len(df) + return results + + +# ---- Config generators ---- + +def gen_labels(): + """Config: labels - All staged + OOPS labels, single train split.""" + staged = load_staged_labels() + itw = load_itw_labels() + df = pd.concat([staged, itw], ignore_index=True) + df = select_and_cast(df, CORE_COLUMNS, "core") + path = write_parquet(df, "labels", "train") + return {"labels": {"train": len(df)}} + + +def gen_labels_syn(): + """Config: labels-syn - OF-Syn labels with demographics, 
single train split.""" + df = load_syn_labels() + df = select_and_cast(df, SYN_COLUMNS, "syn") + path = write_parquet(df, "labels-syn", "train") + return {"labels-syn": {"train": len(df)}} + + +def gen_metadata_syn(): + """Config: metadata-syn - OF-Syn video-level metadata, single train split.""" + df = load_csv(METADATA_FILE) + # Select only the metadata columns (drop prompt_id) + metadata_cols = ["path"] + DEMOGRAPHIC_COLUMNS + available = [c for c in metadata_cols if c in df.columns] + df = df[available].drop_duplicates(subset=["path"]).reset_index(drop=True) + df["dataset"] = "of-syn" + df = select_and_cast(df, METADATA_COLUMNS, "metadata") + path = write_parquet(df, "metadata-syn", "train") + return {"metadata-syn": {"train": len(df)}} + + +def gen_of_sta(split_type): + """Config: of-sta-cs / of-sta-cv - 8 staged datasets combined.""" + config_name = f"of-sta-{split_type}" + labels = load_staged_labels() + results = generate_split_config( + config_name, split_type, + lambda st, sn: staged_split_files(st, sn), + labels, CORE_COLUMNS, "core", + ) + return {config_name: results} + + +def gen_of_itw(): + """Config: of-itw - OOPS-Fall in-the-wild.""" + labels = load_itw_labels() + results = {} + for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]: + sf = [f"splits/cs/OOPS/{csv_name}.csv"] + merged = merge_split_labels(sf, labels) + df = select_and_cast(merged, CORE_COLUMNS, "core") + write_parquet(df, "of-itw", split_name) + results[split_name] = len(df) + return {"of-itw": results} + + +def gen_of_syn(split_type, config_name): + """Config: of-syn variants.""" + labels = load_syn_labels() + results = {} + for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]: + sf = [f"splits/syn/{split_type}/{csv_name}.csv"] + merged = merge_split_labels(sf, labels) + df = select_and_cast(merged, SYN_COLUMNS, "syn") + write_parquet(df, config_name, split_name) + results[split_name] = len(df) + return 
{config_name: results} + + +def gen_crossdomain(config_name, train_split_type, train_source, test_split_type, + test_source): + """Config: cross-domain configs (train from one source, test from another).""" + # Load labels for train and test sources + if train_source == "staged": + train_labels = load_staged_labels() + train_split_fn = lambda sn: staged_split_files(train_split_type, sn) + elif train_source == "syn": + train_labels = load_syn_labels() + train_split_fn = lambda sn: [f"splits/syn/{train_split_type}/{sn}.csv"] + else: + raise ValueError(f"Unknown train_source: {train_source}") + + if test_source == "itw": + test_labels = load_itw_labels() + test_split_fn = lambda sn: [f"splits/{test_split_type}/OOPS/{sn}.csv"] + else: + raise ValueError(f"Unknown test_source: {test_source}") + + results = {} + + # Train and val come from train source + for split_name, csv_name in [("train", "train"), ("validation", "val")]: + sf = train_split_fn(csv_name) + merged = merge_split_labels(sf, train_labels) + # Cross-domain always uses core 7-col schema + df = select_and_cast(merged, CORE_COLUMNS, "core") + write_parquet(df, config_name, split_name) + results[split_name] = len(df) + + # Test comes from test source + sf = test_split_fn("test") + merged = merge_split_labels(sf, test_labels) + df = select_and_cast(merged, CORE_COLUMNS, "core") + write_parquet(df, config_name, "test") + results["test"] = len(df) + + return {config_name: results} + + +def gen_aggregate(split_type): + """Config: cs / cv - all staged + OOPS combined.""" + config_name = split_type + all_labels = pd.concat([load_staged_labels(), load_itw_labels()], ignore_index=True) + results = {} + for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]: + sf = staged_split_files(split_type, csv_name) + [ + f"splits/{split_type}/OOPS/{csv_name}.csv" + ] + merged = merge_split_labels(sf, all_labels) + df = select_and_cast(merged, CORE_COLUMNS, "core") + write_parquet(df, 
config_name, split_name) + results[split_name] = len(df) + return {config_name: results} + + +def gen_individual(ds_name): + """Config: individual dataset with CS splits.""" + labels = load_csv(STAGED_LABEL_FILES[ds_name]) + results = {} + for split_name, csv_name in [("train", "train"), ("validation", "val"), ("test", "test")]: + sf = [f"splits/cs/{ds_name}/{csv_name}.csv"] + merged = merge_split_labels(sf, labels) + df = select_and_cast(merged, CORE_COLUMNS, "core") + write_parquet(df, ds_name, split_name) + results[split_name] = len(df) + return {ds_name: results} + + +# ---- Main ---- + +def main(): + print(f"Generating parquet files in: {PARQUET_DIR}") + PARQUET_DIR.mkdir(parents=True, exist_ok=True) + + all_results = {} + + # Labels configs (single train split) + print("\n--- Labels configs ---") + for gen_fn in [gen_labels, gen_labels_syn, gen_metadata_syn]: + result = gen_fn() + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: {count} rows") + + # OF-Staged configs + print("\n--- OF-Staged configs ---") + for st in ["cs", "cv"]: + result = gen_of_sta(st) + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: {count} rows") + + # OF-ItW config + print("\n--- OF-ItW config ---") + result = gen_of_itw() + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: {count} rows") + + # OF-Syn configs + print("\n--- OF-Syn configs ---") + syn_configs = [ + ("random", "of-syn"), + ("cross_age", "of-syn-cross-age"), + ("cross_ethnicity", "of-syn-cross-ethnicity"), + ("cross_bmi", "of-syn-cross-bmi"), + ] + for split_type, config_name in syn_configs: + result = gen_of_syn(split_type, config_name) + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: 
{count} rows") + + # Cross-domain configs + print("\n--- Cross-domain configs ---") + crossdomain_configs = [ + ("of-sta-itw-cs", "cs", "staged", "cs", "itw"), + ("of-sta-itw-cv", "cv", "staged", "cv", "itw"), + ("of-syn-itw", "random", "syn", "cs", "itw"), + ] + for config_name, train_st, train_src, test_st, test_src in crossdomain_configs: + result = gen_crossdomain(config_name, train_st, train_src, test_st, test_src) + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: {count} rows") + + # Aggregate configs + print("\n--- Aggregate configs ---") + for st in ["cs", "cv"]: + result = gen_aggregate(st) + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: {count} rows") + + # Individual dataset configs + print("\n--- Individual dataset configs ---") + for ds_name in STAGED_DATASETS: + result = gen_individual(ds_name) + all_results.update(result) + for config, splits in result.items(): + for split, count in splits.items(): + print(f" {config}/{split}: {count} rows") + + # Deprecated aliases (copy parquet files) + print("\n--- Deprecated aliases ---") + for old_name, new_name in DEPRECATED_ALIASES.items(): + result = copy_parquet(new_name, old_name) + for split, count in result.items(): + print(f" {old_name}/{split}: {count} rows (alias of {new_name})") + all_results[old_name] = result + + # Summary + print(f"\n{'='*60}") + print(f"Generated parquet files for {len(all_results)} configs") + total_files = sum(1 for d in PARQUET_DIR.rglob("*.parquet")) + print(f"Total parquet files: {total_files}") + + # Print total size + total_bytes = sum(f.stat().st_size for f in PARQUET_DIR.rglob("*.parquet")) + print(f"Total size: {total_bytes / 1024 / 1024:.1f} MB") + + return all_results + + +if __name__ == "__main__": + main() diff --git a/omnifall_builder.py b/omnifall_builder.py new file mode 100644 index 
0000000000000000000000000000000000000000..e9cc4bdd79953a17cc82a43926cd1559d4326545 --- /dev/null +++ b/omnifall_builder.py @@ -0,0 +1,1192 @@ +"""OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection + +This dataset builder provides unified access to the OmniFall benchmark, which integrates: +- OF-Staged (OF-Sta): 8 public staged fall detection datasets (~14h single-view) +- OF-In-the-Wild (OF-ItW): Curated genuine accident videos from OOPS (~2.7h) +- OF-Synthetic (OF-Syn): 12,000 synthetic videos generated with Wan 2.2 (~17h) + +All components share a 16-class activity taxonomy. Staged datasets use classes 0-9, +while OF-ItW and OF-Syn use the full 0-15 range. +""" + +import os +import warnings +import pandas as pd +import datasets +from datasets import ( + BuilderConfig, + GeneratorBasedBuilder, + Features, + Value, + ClassLabel, + Sequence, + SplitGenerator, + Split, + Video, +) + +_CITATION = """\ +@misc{omnifall, + title={OmniFall: A Unified Staged-to-Wild Benchmark for Human Fall Detection}, + author={David Schneider and Zdravko Marinov and Rafael Baur and Zeyun Zhong and Rodi D\\\"uger and Rainer Stiefelhagen}, + year={2025}, + eprint={2505.19889}, + archivePrefix={arXiv}, + primaryClass={cs.CV}, + url={https://arxiv.org/abs/2505.19889}, +} +""" + +_DESCRIPTION = """\ +OmniFall is a comprehensive benchmark that unifies staged, in-the-wild, and synthetic +fall detection datasets under a common 16-class activity taxonomy. 
+""" + +_HOMEPAGE = "https://huggingface.co/datasets/simplexsigil2/omnifall" +_LICENSE = "cc-by-nc-4.0" + +# 16 activity classes shared across all components +_ACTIVITY_LABELS = [ + "walk", # 0 + "fall", # 1 + "fallen", # 2 + "sit_down", # 3 + "sitting", # 4 + "lie_down", # 5 + "lying", # 6 + "stand_up", # 7 + "standing", # 8 + "other", # 9 + "kneel_down", # 10 + "kneeling", # 11 + "squat_down", # 12 + "squatting", # 13 + "crawl", # 14 + "jump", # 15 +] + +# Demographic and scene metadata categories (OF-Syn only) +_AGE_GROUPS = [ + "toddlers_1_4", "children_5_12", "teenagers_13_17", + "young_adults_18_34", "middle_aged_35_64", "elderly_65_plus", +] +_GENDERS = ["male", "female"] +_SKIN_TONES = [f"mst{i}" for i in range(1, 11)] +_ETHNICITIES = ["white", "black", "asian", "hispanic_latino", "aian", "nhpi", "mena"] +_BMI_BANDS = ["underweight", "normal", "overweight", "obese"] +_HEIGHT_BANDS = ["short", "avg", "tall"] +_ENVIRONMENTS = ["indoor", "outdoor"] +_CAMERA_ELEVATIONS = ["eye", "low", "high", "top"] +_CAMERA_AZIMUTHS = ["front", "rear", "left", "right"] +_CAMERA_DISTANCES = ["medium", "far"] +_CAMERA_SHOTS = ["static_wide", "static_medium_wide"] +_SPEEDS = ["24fps_rt", "25fps_rt", "30fps_rt", "std_rt"] + +# The 8 staged datasets +_STAGED_DATASETS = [ + "caucafall", "cmdfall", "edf", "gmdcsa24", + "le2i", "mcfd", "occu", "up_fall", +] + +# Label CSV file paths (relative to repo root) +_STAGED_LABEL_FILES = [f"labels/{name}.csv" for name in [ + "caucafall", "cmdfall", "edf", "GMDCSA24", + "le2i", "mcfd", "occu", "up_fall", +]] +_ITW_LABEL_FILE = "labels/OOPS.csv" +_SYN_LABEL_FILE = "labels/of-syn.csv" +_SYN_VIDEO_ARCHIVE = "data_files/omnifall-synthetic_av1.tar" + +# OOPS video auto-download configuration +_OOPS_CACHE_DIR = os.path.join(os.path.expanduser("~"), ".cache", "omnifall", "oops_prepared") +_OOPS_URL = "https://oops.cs.columbia.edu/data/video_and_anns.tar.gz" +_OOPS_EXPECTED_VIDEO_COUNT = 818 +_OOPS_MAPPING_FILE = "data_files/oops_video_mapping.csv" + 
+_OOPS_LICENSE_TEXT = """\ +========================================================================== +OOPS Dataset License Notice +========================================================================== + +The OF-ItW component of OmniFall uses videos from the OOPS dataset. +The following notice is from the OOPS dataset website +(https://oops.cs.columbia.edu/data/): + + "By pressing any of the links above, you acknowledge that we do not + own the copyright to these videos and that they are solely provided + for non-commercial research and/or educational purposes. This dataset + is licensed under a Creative Commons Attribution-NonCommercial- + ShareAlike 4.0 International License." + +If you use OF-ItW in your research, please also cite the OOPS paper: + + @inproceedings{{epstein2020oops, + title={{Oops! predicting unintentional action in video}}, + author={{Epstein, Dave and Chen, Boyuan and Vondrick, Carl}}, + booktitle={{Proceedings of the IEEE/CVF Conference on Computer + Vision and Pattern Recognition}}, + pages={{919--929}}, + year={{2020}} + }} + +The download will stream ~45GB from the OOPS website and extract {count} +videos (~2.6GB disk space) to: {cache_dir} +========================================================================== +""" + + +# ---- Feature schema definitions ---- + +def _core_features(): + """7-column schema for staged/OOPS data.""" + return Features({ + "path": Value("string"), + "label": ClassLabel(num_classes=16, names=_ACTIVITY_LABELS), + "start": Value("float32"), + "end": Value("float32"), + "subject": Value("int32"), + "cam": Value("int32"), + "dataset": Value("string"), + }) + + +def _syn_features(): + """19-column schema for synthetic data (core + demographic/scene metadata).""" + return Features({ + "path": Value("string"), + "label": ClassLabel(num_classes=16, names=_ACTIVITY_LABELS), + "start": Value("float32"), + "end": Value("float32"), + "subject": Value("int32"), + "cam": Value("int32"), + "dataset": Value("string"), 
+ # Demographic metadata + "age_group": ClassLabel(num_classes=6, names=_AGE_GROUPS), + "gender_presentation": ClassLabel(num_classes=2, names=_GENDERS), + "monk_skin_tone": ClassLabel(num_classes=10, names=_SKIN_TONES), + "race_ethnicity_omb": ClassLabel(num_classes=7, names=_ETHNICITIES), + "bmi_band": ClassLabel(num_classes=4, names=_BMI_BANDS), + "height_band": ClassLabel(num_classes=3, names=_HEIGHT_BANDS), + # Scene metadata + "environment_category": ClassLabel(num_classes=2, names=_ENVIRONMENTS), + "camera_shot": ClassLabel(num_classes=2, names=_CAMERA_SHOTS), + "speed": ClassLabel(num_classes=4, names=_SPEEDS), + "camera_elevation": ClassLabel(num_classes=4, names=_CAMERA_ELEVATIONS), + "camera_azimuth": ClassLabel(num_classes=4, names=_CAMERA_AZIMUTHS), + "camera_distance": ClassLabel(num_classes=2, names=_CAMERA_DISTANCES), + }) + + +def _syn_metadata_features(): + """Feature schema for OF-Syn metadata config (video-level, no temporal segments).""" + return Features({ + "path": Value("string"), + "dataset": Value("string"), + "age_group": ClassLabel(num_classes=6, names=_AGE_GROUPS), + "gender_presentation": ClassLabel(num_classes=2, names=_GENDERS), + "monk_skin_tone": ClassLabel(num_classes=10, names=_SKIN_TONES), + "race_ethnicity_omb": ClassLabel(num_classes=7, names=_ETHNICITIES), + "bmi_band": ClassLabel(num_classes=4, names=_BMI_BANDS), + "height_band": ClassLabel(num_classes=3, names=_HEIGHT_BANDS), + "environment_category": ClassLabel(num_classes=2, names=_ENVIRONMENTS), + "camera_shot": ClassLabel(num_classes=2, names=_CAMERA_SHOTS), + "speed": ClassLabel(num_classes=4, names=_SPEEDS), + "camera_elevation": ClassLabel(num_classes=4, names=_CAMERA_ELEVATIONS), + "camera_azimuth": ClassLabel(num_classes=4, names=_CAMERA_AZIMUTHS), + "camera_distance": ClassLabel(num_classes=2, names=_CAMERA_DISTANCES), + }) + + +def _syn_framewise_features(): + """Feature schema for OF-Syn frame-wise labels (81 labels per video).""" + return Features({ + "path": 
Value("string"), + "dataset": Value("string"), + "frame_labels": Sequence( + ClassLabel(num_classes=16, names=_ACTIVITY_LABELS), length=81 + ), + "age_group": ClassLabel(num_classes=6, names=_AGE_GROUPS), + "gender_presentation": ClassLabel(num_classes=2, names=_GENDERS), + "monk_skin_tone": ClassLabel(num_classes=10, names=_SKIN_TONES), + "race_ethnicity_omb": ClassLabel(num_classes=7, names=_ETHNICITIES), + "bmi_band": ClassLabel(num_classes=4, names=_BMI_BANDS), + "height_band": ClassLabel(num_classes=3, names=_HEIGHT_BANDS), + "environment_category": ClassLabel(num_classes=2, names=_ENVIRONMENTS), + "camera_shot": ClassLabel(num_classes=2, names=_CAMERA_SHOTS), + "speed": ClassLabel(num_classes=4, names=_SPEEDS), + "camera_elevation": ClassLabel(num_classes=4, names=_CAMERA_ELEVATIONS), + "camera_azimuth": ClassLabel(num_classes=4, names=_CAMERA_AZIMUTHS), + "camera_distance": ClassLabel(num_classes=2, names=_CAMERA_DISTANCES), + }) + + +def _paths_only_features(): + """Minimal feature schema for paths-only mode.""" + return Features({"path": Value("string")}) + + +# ---- Config ---- + +class OmniFallConfig(BuilderConfig): + """BuilderConfig for OmniFall dataset. + + Args: + config_type: What kind of data to load. + "labels" - All labels in a single split (no train/val/test). + "split" - Train/val/test splits from split CSV files. + "metadata" - Video-level metadata (OF-Syn only). + "framewise" - Frame-wise HDF5 labels (OF-Syn only). + data_source: Which component(s) to load. + "staged" - 8 staged lab datasets + "itw" - OOPS in-the-wild + "syn" - OF-Syn synthetic + "staged+itw" - Staged and OOPS combined + Individual dataset names (e.g. "cmdfall") for single datasets. + split_type: Split strategy. + "cs" / "cv" for staged/OOPS, "random" / "cross_age" / etc. for synthetic. + train_source: For cross-domain configs, overrides data_source for train/val. + test_source: For cross-domain configs, overrides data_source for test. 
+ test_split_type: For cross-domain configs, overrides split_type for test. + paths_only: If True, only return video paths (no label merging). + framewise: If True, load frame-wise labels from HDF5 (OF-Syn only). + include_video: If True, download and include video files. + For OF-Syn configs, videos are downloaded from the HF repo. + For OF-ItW configs, requires oops_video_dir to be set. + decode_video: If True (default), use Video() feature for auto-decoding. + If False, return absolute file path as string. + oops_video_dir: Path to directory containing prepared OOPS videos + (produced by prepare_oops_videos.py). Required when loading + OF-ItW configs with include_video=True. + deprecated_alias_for: If set, this config is a deprecated alias. + """ + + def __init__( + self, + config_type="labels", + data_source="staged+itw", + split_type=None, + train_source=None, + test_source=None, + test_split_type=None, + paths_only=False, + framewise=False, + include_video=False, + decode_video=True, + oops_video_dir=None, + deprecated_alias_for=None, + **kwargs, + ): + super().__init__(**kwargs) + self.config_type = config_type + self.data_source = data_source + self.split_type = split_type + self.train_source = train_source + self.test_source = test_source + self.test_split_type = test_split_type + self.paths_only = paths_only + self.framewise = framewise + self.include_video = include_video + self.decode_video = decode_video + self.oops_video_dir = oops_video_dir + self.deprecated_alias_for = deprecated_alias_for + + @property + def is_crossdomain(self): + return self.train_source is not None + + +def _make_config(name, description, **kwargs): + """Helper to create a config with consistent version.""" + return OmniFallConfig( + name=name, + version=datasets.Version("2.0.0"), + description=description, + **kwargs, + ) + + +# ---- Config definitions ---- + +_LABELS_CONFIGS = [ + _make_config( + "labels", + "All staged + OOPS labels (52k segments, 7 columns). 
Default config.", + config_type="labels", + data_source="staged+itw", + ), + _make_config( + "labels-syn", + "OF-Syn labels with demographic metadata (19k segments, 19 columns).", + config_type="labels", + data_source="syn", + ), + _make_config( + "metadata-syn", + "OF-Syn video-level metadata (12k videos, no temporal segments).", + config_type="metadata", + data_source="syn", + ), + _make_config( + "framewise-syn", + "OF-Syn frame-wise labels from HDF5 (81 labels per video).", + config_type="framewise", + data_source="syn", + framewise=True, + ), +] + +_AGGREGATE_CONFIGS = [ + _make_config( + "cs", + "Cross-subject splits for all staged + OOPS datasets combined.", + config_type="split", + data_source="staged+itw", + split_type="cs", + ), + _make_config( + "cv", + "Cross-view splits for all staged + OOPS datasets combined.", + config_type="split", + data_source="staged+itw", + split_type="cv", + ), +] + +_PRIMARY_CONFIGS = [ + _make_config( + "of-sta-cs", + "OF-Staged: 8 staged datasets, cross-subject splits.", + config_type="split", + data_source="staged", + split_type="cs", + ), + _make_config( + "of-sta-cv", + "OF-Staged: 8 staged datasets, cross-view splits.", + config_type="split", + data_source="staged", + split_type="cv", + ), + _make_config( + "of-itw", + "OF-ItW: OOPS-Fall in-the-wild genuine accidents.", + config_type="split", + data_source="itw", + split_type="cs", + ), + _make_config( + "of-syn", + "OF-Syn: synthetic, random 80/10/10 split.", + config_type="split", + data_source="syn", + split_type="random", + ), + _make_config( + "of-syn-cross-age", + "OF-Syn: cross-age split (train: adults, test: children/elderly).", + config_type="split", + data_source="syn", + split_type="cross_age", + ), + _make_config( + "of-syn-cross-ethnicity", + "OF-Syn: cross-ethnicity split.", + config_type="split", + data_source="syn", + split_type="cross_ethnicity", + ), + _make_config( + "of-syn-cross-bmi", + "OF-Syn: cross-BMI split (train: normal/underweight, test: 
obese).", + config_type="split", + data_source="syn", + split_type="cross_bmi", + ), +] + +_CROSSDOMAIN_CONFIGS = [ + _make_config( + "of-sta-itw-cs", + "Cross-domain: train/val on staged CS, test on OOPS.", + config_type="split", + data_source="staged", + split_type="cs", + train_source="staged", + test_source="itw", + test_split_type="cs", + ), + _make_config( + "of-sta-itw-cv", + "Cross-domain: train/val on staged CV, test on OOPS.", + config_type="split", + data_source="staged", + split_type="cv", + train_source="staged", + test_source="itw", + test_split_type="cv", + ), + _make_config( + "of-syn-itw", + "Cross-domain: train/val on OF-Syn random, test on OOPS.", + config_type="split", + data_source="syn", + split_type="random", + train_source="syn", + test_source="itw", + test_split_type="cs", + ), +] + +_INDIVIDUAL_CONFIGS = [ + _make_config( + name, + f"{name} dataset with cross-subject splits.", + config_type="split", + data_source=name, + split_type="cs", + ) + for name in _STAGED_DATASETS +] + +# Deprecated aliases: defined with full correct attributes so _info() works +# immediately (HF calls _info() during __init__, before any custom init code). 
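The comment above describes how deprecated config names are kept as fully-populated aliases of their replacements. As a minimal standalone sketch of the resolve-and-warn pattern (the `ALIASES` dict and `resolve_config_name` helper here are illustrative stand-ins, not part of the builder itself):

```python
import warnings

# Illustrative subset of the old-name -> new-name mapping.
ALIASES = {
    "cs-staged": "of-sta-cs",
    "cv-staged": "of-sta-cv",
    "OOPS": "of-itw",
}

def resolve_config_name(name):
    """Return the canonical config name, warning if a deprecated alias is used."""
    if name in ALIASES:
        warnings.warn(
            f"Config '{name}' is deprecated. Use '{ALIASES[name]}' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return ALIASES[name]
    return name

resolve_config_name("OOPS")       # emits a DeprecationWarning, returns "of-itw"
resolve_config_name("of-sta-cs")  # canonical name passes through unchanged
```

Because the aliases are built with the target config's full attribute set (rather than being rewritten at load time), `_info()` returns the correct feature schema even though HF calls it during `__init__`.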
+_DEPRECATED_ALIASES = { + "cs-staged": "of-sta-cs", + "cv-staged": "of-sta-cv", + "cs-staged-wild": "of-sta-itw-cs", + "cv-staged-wild": "of-sta-itw-cv", + "OOPS": "of-itw", +} + +# Build a lookup from config name to config object +_ALL_NAMED_CONFIGS = { + cfg.name: cfg + for cfg in ( + _LABELS_CONFIGS + _AGGREGATE_CONFIGS + _PRIMARY_CONFIGS + + _CROSSDOMAIN_CONFIGS + _INDIVIDUAL_CONFIGS + ) +} + +_DEPRECATED_CONFIGS = [] +for _old_name, _new_name in _DEPRECATED_ALIASES.items(): + _target = _ALL_NAMED_CONFIGS[_new_name] + _DEPRECATED_CONFIGS.append( + _make_config( + _old_name, + f"DEPRECATED: Use '{_new_name}' instead.", + config_type=_target.config_type, + data_source=_target.data_source, + split_type=_target.split_type, + train_source=_target.train_source, + test_source=_target.test_source, + test_split_type=_target.test_split_type, + paths_only=_target.paths_only, + framewise=_target.framewise, + include_video=_target.include_video, + decode_video=_target.decode_video, + oops_video_dir=_target.oops_video_dir, + deprecated_alias_for=_new_name, + ) + ) + + +# ---- Builder ---- + +class OmniFall(GeneratorBasedBuilder): + """OmniFall unified fall detection benchmark builder.""" + + VERSION = datasets.Version("2.0.0") + BUILDER_CONFIG_CLASS = OmniFallConfig + + BUILDER_CONFIGS = ( + _LABELS_CONFIGS + + _AGGREGATE_CONFIGS + + _PRIMARY_CONFIGS + + _CROSSDOMAIN_CONFIGS + + _INDIVIDUAL_CONFIGS + + _DEPRECATED_CONFIGS + ) + + DEFAULT_CONFIG_NAME = "labels" + + def _info(self): + """Return dataset metadata and feature schema.""" + cfg = self.config + + if cfg.config_type == "metadata": + features = _syn_metadata_features() + elif cfg.framewise: + features = _syn_framewise_features() + elif cfg.paths_only: + features = _paths_only_features() + elif cfg.is_crossdomain: + # Cross-domain configs mix sources, use common 7-col schema + features = _core_features() + elif cfg.data_source == "syn": + features = _syn_features() + else: + features = _core_features() + + if 
cfg.include_video: + features["video"] = Video() if cfg.decode_video else Value("string") + + return datasets.DatasetInfo( + description=_DESCRIPTION, + features=features, + homepage=_HOMEPAGE, + license=_LICENSE, + citation=_CITATION, + ) + + # ---- Split generators ---- + + def _split_generators(self, dl_manager): + cfg = self.config + + # Emit deprecation warning + if cfg.deprecated_alias_for: + warnings.warn( + f"Config '{cfg.name}' is deprecated. " + f"Use '{cfg.deprecated_alias_for}' instead.", + DeprecationWarning, + stacklevel=2, + ) + + # Labels configs: all data in a single "train" split + if cfg.config_type == "labels": + return self._labels_splits(cfg, dl_manager) + + # Metadata config + if cfg.config_type == "metadata": + metadata_path = dl_manager.download("videos/metadata.csv") + return [ + SplitGenerator( + name=Split.TRAIN, + gen_kwargs={"mode": "metadata", "metadata_path": metadata_path}, + ), + ] + + # Framewise config (no split, all data) + if cfg.config_type == "framewise": + archive_path = dl_manager.download_and_extract( + "data_files/syn_frame_wise_labels.tar.zst" + ) + metadata_path = dl_manager.download("videos/metadata.csv") + return [ + SplitGenerator( + name=Split.TRAIN, + gen_kwargs={ + "mode": "framewise", + "hdf5_dir": archive_path, + "metadata_path": metadata_path, + "split_file": None, + }, + ), + ] + + # Split configs (train/val/test) + if cfg.config_type == "split": + return self._split_config_generators(cfg, dl_manager) + + raise ValueError(f"Unknown config_type: {cfg.config_type}") + + def _labels_splits(self, cfg, dl_manager): + """Generate split generators for labels-type configs.""" + if cfg.data_source == "syn": + filepath = dl_manager.download(_SYN_LABEL_FILE) + return [ + SplitGenerator( + name=Split.TRAIN, + gen_kwargs={"mode": "csv_direct", "filepath": filepath}, + ), + ] + elif cfg.data_source == "staged+itw": + filepaths = dl_manager.download(_STAGED_LABEL_FILES + [_ITW_LABEL_FILE]) + return [ + SplitGenerator( + 
name=Split.TRAIN, + gen_kwargs={"mode": "csv_multi", "filepaths": filepaths}, + ), + ] + else: + raise ValueError(f"Unsupported data_source for labels: {cfg.data_source}") + + def _split_config_generators(self, cfg, dl_manager): + """Generate split generators for train/val/test split configs.""" + if cfg.is_crossdomain: + return self._crossdomain_splits(cfg, dl_manager) + + if cfg.data_source == "syn": + return self._syn_splits(cfg, dl_manager) + elif cfg.data_source == "staged": + return self._staged_splits(cfg, dl_manager) + elif cfg.data_source == "itw": + return self._itw_splits(cfg, dl_manager) + elif cfg.data_source == "staged+itw": + return self._aggregate_splits(cfg, dl_manager) + elif cfg.data_source in _STAGED_DATASETS: + return self._individual_splits(cfg, dl_manager) + else: + raise ValueError(f"Unknown data_source: {cfg.data_source}") + + def _staged_split_files(self, split_type, split_name): + """Return list of split CSV paths for all 8 staged datasets.""" + return [f"splits/{split_type}/{ds}/{split_name}.csv" for ds in _STAGED_DATASETS] + + def _resolve_oops_video_dir(self, cfg, dl_manager): + """Resolve the OOPS video directory for OF-ItW configs. + + Priority: + 1. If include_video is False, return None. + 2. If oops_video_dir is explicitly provided, validate and return it. + 3. If cache exists with expected video count, return cache path. + 4. Otherwise, prompt for license consent and auto-download. + """ + if not cfg.include_video: + return None + + # User explicitly provided a directory + if cfg.oops_video_dir: + video_dir = os.path.abspath(cfg.oops_video_dir) + if not os.path.isdir(video_dir): + raise FileNotFoundError( + f"oops_video_dir does not exist: {video_dir}\n" + "Run prepare_oops_videos.py to prepare OOPS videos first." 
+ ) + return video_dir + + # Check cache + cache_dir = _OOPS_CACHE_DIR + if self._oops_cache_is_valid(cache_dir): + return cache_dir + + # Auto-download: prompt for consent and extract + return self._auto_prepare_oops(cache_dir, dl_manager) + + def _oops_cache_is_valid(self, cache_dir): + """Check if the OOPS video cache contains the expected number of videos.""" + falls_dir = os.path.join(cache_dir, "falls") + if not os.path.isdir(falls_dir): + return False + mp4_count = sum(1 for f in os.listdir(falls_dir) if f.endswith(".mp4")) + if mp4_count >= _OOPS_EXPECTED_VIDEO_COUNT: + return True + if mp4_count > 0: + warnings.warn( + f"OOPS cache at {cache_dir} contains {mp4_count}/{_OOPS_EXPECTED_VIDEO_COUNT} " + f"videos (incomplete). Will re-download." + ) + return False + + def _auto_prepare_oops(self, cache_dir, dl_manager): + """Download and prepare OOPS videos with interactive license consent.""" + import csv + import subprocess + import tarfile + + # Print license and get consent + print(_OOPS_LICENSE_TEXT.format( + count=_OOPS_EXPECTED_VIDEO_COUNT, cache_dir=cache_dir, + )) + try: + response = input('Type "YES" to accept the license and begin download: ') + except EOFError: + raise RuntimeError( + "Cannot prompt for OOPS license consent in non-interactive mode.\n" + "Either run prepare_oops_videos.py manually and pass oops_video_dir,\n" + "or run this script in an interactive terminal." + ) + + if response.strip() != "YES": + raise RuntimeError( + "OOPS license not accepted. To load OF-ItW with videos, either:\n" + "1. Run again and type YES when prompted, or\n" + "2. Run prepare_oops_videos.py manually and pass oops_video_dir." 
+ ) + + # Download the mapping file from the HF repo + mapping_path = dl_manager.download(_OOPS_MAPPING_FILE) + mapping = {} + with open(mapping_path) as f: + reader = csv.DictReader(f) + for row in reader: + mapping[row["oops_path"]] = row["itw_path"] + + # Create output directory + os.makedirs(os.path.join(cache_dir, "falls"), exist_ok=True) + + # Extract videos + found = self._extract_oops_videos(_OOPS_URL, mapping, cache_dir) + + if found == 0: + raise RuntimeError( + "Failed to extract any OOPS videos. Check network connectivity " + "and try again, or use prepare_oops_videos.py with a local archive." + ) + + if found < _OOPS_EXPECTED_VIDEO_COUNT: + warnings.warn( + f"Only extracted {found}/{_OOPS_EXPECTED_VIDEO_COUNT} OOPS videos. " + f"Some videos may be missing from the archive." + ) + + return cache_dir + + def _extract_oops_videos(self, source, mapping, output_dir): + """Stream through the OOPS archive and extract matching videos.""" + import subprocess + import tarfile + + total = len(mapping) + print(f"Extracting {total} videos from OOPS archive...") + print("(Streaming ~45GB from web, no local disk space needed for archive)") + print("(This may take 30-60 minutes depending on connection speed)") + + os.makedirs(os.path.join(output_dir, "falls"), exist_ok=True) + + found = 0 + remaining = set(mapping.keys()) + + cmd = f'curl -sL "{source}" | tar -xzf - --to-stdout "oops_dataset/video.tar.gz"' + proc = subprocess.Popen( + cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, + ) + + try: + with tarfile.open(fileobj=proc.stdout, mode="r|gz") as tar: + for member in tar: + if not remaining: + break + if member.name in remaining: + itw_path = mapping[member.name] + out_path = os.path.join(output_dir, itw_path) + + f = tar.extractfile(member) + if f is not None: + with open(out_path, "wb") as out_f: + while True: + chunk = f.read(1024 * 1024) + if not chunk: + break + out_f.write(chunk) + f.close() + found += 1 + remaining.discard(member.name) + if 
found % 50 == 0: + print(f" Extracted {found}/{total} videos...") + finally: + proc.stdout.close() + proc.wait() + + print(f"Extracted {found}/{total} videos to {output_dir}") + if remaining: + print(f"WARNING: {len(remaining)} videos not found in archive.") + + return found + + def _make_split_merge_generators(self, split_files_per_split, label_files, + dl_manager, video_dir=None): + """Helper to create train/val/test SplitGenerators for split_merge mode. + + Args: + split_files_per_split: dict mapping split name to list of relative paths. + label_files: list of relative label file paths. + dl_manager: download manager for resolving paths. + video_dir: path to extracted video directory, or None. + """ + resolved_labels = dl_manager.download(label_files) + return [ + SplitGenerator( + name=split_enum, + gen_kwargs={ + "mode": "split_merge", + "split_files": dl_manager.download(split_files_per_split[csv_name]), + "label_files": resolved_labels, + "video_dir": video_dir, + }, + ) + for split_enum, csv_name in [ + (Split.TRAIN, "train"), + (Split.VALIDATION, "val"), + (Split.TEST, "test"), + ] + ] + + def _staged_splits(self, cfg, dl_manager): + """OF-Staged: 8 datasets combined with CS or CV splits.""" + st = cfg.split_type + return self._make_split_merge_generators( + {sn: self._staged_split_files(st, sn) for sn in ("train", "val", "test")}, + _STAGED_LABEL_FILES, + dl_manager, + ) + + def _itw_splits(self, cfg, dl_manager): + """OF-ItW: OOPS-Fall (CS=CV identical).""" + st = cfg.split_type + video_dir = self._resolve_oops_video_dir(cfg, dl_manager) + return self._make_split_merge_generators( + {sn: [f"splits/{st}/OOPS/{sn}.csv"] for sn in ("train", "val", "test")}, + [_ITW_LABEL_FILE], + dl_manager, + video_dir=video_dir, + ) + + def _aggregate_splits(self, cfg, dl_manager): + """All staged + OOPS combined (cs or cv).""" + st = cfg.split_type + all_labels = _STAGED_LABEL_FILES + [_ITW_LABEL_FILE] + return self._make_split_merge_generators( + {sn: 
self._staged_split_files(st, sn) + [f"splits/{st}/OOPS/{sn}.csv"] + for sn in ("train", "val", "test")}, + all_labels, + dl_manager, + ) + + def _individual_splits(self, cfg, dl_manager): + """Individual dataset with CS splits.""" + ds_name = cfg.data_source + label_file_map = { + "caucafall": "labels/caucafall.csv", + "cmdfall": "labels/cmdfall.csv", + "edf": "labels/edf.csv", + "gmdcsa24": "labels/GMDCSA24.csv", + "le2i": "labels/le2i.csv", + "mcfd": "labels/mcfd.csv", + "occu": "labels/occu.csv", + "up_fall": "labels/up_fall.csv", + } + label_file = label_file_map[ds_name] + st = cfg.split_type + return self._make_split_merge_generators( + {sn: [f"splits/{st}/{ds_name}/{sn}.csv"] for sn in ("train", "val", "test")}, + [label_file], + dl_manager, + ) + + def _syn_splits(self, cfg, dl_manager): + """OF-Syn split strategies.""" + st = cfg.split_type + split_dir = f"splits/syn/{st}" + + # Download video archive if requested + video_dir = None + if cfg.include_video: + video_dir = dl_manager.download_and_extract(_SYN_VIDEO_ARCHIVE) + + if cfg.framewise: + archive_path = dl_manager.download_and_extract( + "data_files/syn_frame_wise_labels.tar.zst" + ) + metadata_path = dl_manager.download("videos/metadata.csv") + split_files = dl_manager.download( + {sn: f"{split_dir}/{sn}.csv" for sn in ("train", "val", "test")} + ) + return [ + SplitGenerator( + name=split_enum, + gen_kwargs={ + "mode": "framewise", + "hdf5_dir": archive_path, + "metadata_path": metadata_path, + "split_file": split_files[csv_name], + }, + ) + for split_enum, csv_name in [ + (Split.TRAIN, "train"), + (Split.VALIDATION, "val"), + (Split.TEST, "test"), + ] + ] + + if cfg.paths_only: + split_files = dl_manager.download( + {sn: f"{split_dir}/{sn}.csv" for sn in ("train", "val", "test")} + ) + return [ + SplitGenerator( + name=split_enum, + gen_kwargs={ + "mode": "paths_only", + "split_file": split_files[csv_name], + }, + ) + for split_enum, csv_name in [ + (Split.TRAIN, "train"), + (Split.VALIDATION, 
"val"), + (Split.TEST, "test"), + ] + ] + + return self._make_split_merge_generators( + {sn: [f"{split_dir}/{sn}.csv"] for sn in ("train", "val", "test")}, + [_SYN_LABEL_FILE], + dl_manager, + video_dir=video_dir, + ) + + def _crossdomain_splits(self, cfg, dl_manager): + """Cross-domain configs: train/val from one source, test from another.""" + train_st = cfg.split_type + test_st = cfg.test_split_type or "cs" + + # Resolve video directories for each source + train_video_dir = None + if cfg.include_video and cfg.train_source == "syn": + train_video_dir = dl_manager.download_and_extract(_SYN_VIDEO_ARCHIVE) + + test_video_dir = None + if cfg.include_video and cfg.test_source == "itw": + test_video_dir = self._resolve_oops_video_dir(cfg, dl_manager) + + # Determine train/val files and labels + if cfg.train_source == "staged": + train_split_files = { + sn: self._staged_split_files(train_st, sn) + for sn in ("train", "val") + } + train_labels = _STAGED_LABEL_FILES + elif cfg.train_source == "syn": + train_split_files = { + sn: [f"splits/syn/{train_st}/{sn}.csv"] + for sn in ("train", "val") + } + train_labels = [_SYN_LABEL_FILE] + else: + raise ValueError(f"Unsupported train_source: {cfg.train_source}") + + # Determine test files and labels + if cfg.test_source == "itw": + test_split_files = [f"splits/{test_st}/OOPS/test.csv"] + test_labels = [_ITW_LABEL_FILE] + else: + raise ValueError(f"Unsupported test_source: {cfg.test_source}") + + # Download all paths + resolved_train_labels = dl_manager.download(train_labels) + resolved_test_labels = dl_manager.download(test_labels) + resolved_test_splits = dl_manager.download(test_split_files) + + return [ + SplitGenerator( + name=Split.TRAIN, + gen_kwargs={ + "mode": "split_merge", + "split_files": dl_manager.download(train_split_files["train"]), + "label_files": resolved_train_labels, + "video_dir": train_video_dir, + }, + ), + SplitGenerator( + name=Split.VALIDATION, + gen_kwargs={ + "mode": "split_merge", + "split_files": 
dl_manager.download(train_split_files["val"]), + "label_files": resolved_train_labels, + "video_dir": train_video_dir, + }, + ), + SplitGenerator( + name=Split.TEST, + gen_kwargs={ + "mode": "split_merge", + "split_files": resolved_test_splits, + "label_files": resolved_test_labels, + "video_dir": test_video_dir, + }, + ), + ] + + # ---- Example generators ---- + + def _generate_examples(self, mode, **kwargs): + """Dispatch to the appropriate generator based on mode.""" + if mode == "csv_direct": + yield from self._gen_csv_direct(**kwargs) + elif mode == "csv_multi": + yield from self._gen_csv_multi(**kwargs) + elif mode == "split_merge": + yield from self._gen_split_merge(**kwargs) + elif mode == "metadata": + yield from self._gen_metadata(**kwargs) + elif mode == "framewise": + yield from self._gen_framewise(**kwargs) + elif mode == "paths_only": + yield from self._gen_paths_only(**kwargs) + else: + raise ValueError(f"Unknown generation mode: {mode}") + + def _gen_csv_direct(self, filepath): + """Load a single CSV file directly.""" + df = pd.read_csv(filepath) + for idx, row in df.iterrows(): + yield idx, self._row_to_example(row) + + def _gen_csv_multi(self, filepaths): + """Load and concatenate multiple CSV files.""" + dfs = [pd.read_csv(fp) for fp in filepaths] + df = pd.concat(dfs, ignore_index=True) + for idx, row in df.iterrows(): + yield idx, self._row_to_example(row) + + def _gen_split_merge(self, split_files, label_files, video_dir=None): + """Load split paths, merge with labels, yield examples.""" + split_dfs = [pd.read_csv(sf) for sf in split_files] + split_df = pd.concat(split_dfs, ignore_index=True) + + if self.config.paths_only: + for idx, row in split_df.iterrows(): + yield idx, {"path": row["path"]} + return + + label_dfs = [pd.read_csv(lf) for lf in label_files] + labels_df = pd.concat(label_dfs, ignore_index=True) + + merged_df = pd.merge(split_df, labels_df, on="path", how="left") + + for idx, row in merged_df.iterrows(): + example = 
self._row_to_example(row) + if video_dir is not None: + example["video"] = os.path.join(video_dir, row["path"] + ".mp4") + yield idx, example + + def _gen_metadata(self, metadata_path): + """Load OF-Syn video-level metadata.""" + df = pd.read_csv(metadata_path) + metadata_cols = [ + "path", "age_group", "gender_presentation", "monk_skin_tone", + "race_ethnicity_omb", "bmi_band", "height_band", + "environment_category", "camera_shot", "speed", + "camera_elevation", "camera_azimuth", "camera_distance", + ] + available_cols = [c for c in metadata_cols if c in df.columns] + df = df[available_cols].drop_duplicates(subset=["path"]).reset_index(drop=True) + df["dataset"] = "of-syn" + + for idx, row in df.iterrows(): + yield idx, self._row_to_example(row) + + def _gen_framewise(self, hdf5_dir, metadata_path, split_file=None): + """Load frame-wise labels from HDF5 files with metadata.""" + import h5py + import tarfile + import tempfile + from pathlib import Path + + metadata_df = pd.read_csv(metadata_path) + + valid_paths = None + if split_file is not None: + split_df = pd.read_csv(split_file) + valid_paths = set(split_df["path"].tolist()) + + hdf5_path = Path(hdf5_dir) + metadata_fields = [ + "age_group", "gender_presentation", "monk_skin_tone", + "race_ethnicity_omb", "bmi_band", "height_band", + "environment_category", "camera_shot", "speed", + "camera_elevation", "camera_azimuth", "camera_distance", + ] + + if hdf5_path.is_file() and ( + hdf5_path.suffix == ".tar" or tarfile.is_tarfile(str(hdf5_path)) + ): + idx = 0 + with tarfile.open(hdf5_path, "r") as tar: + for member in tar.getmembers(): + if not member.name.endswith(".h5"): + continue + # removeprefix/removesuffix: lstrip("./") strips a char set, not a prefix + video_path = member.name.removeprefix("./").removesuffix(".h5") + if valid_paths is not None and video_path not in valid_paths: + continue + try: + h5_file = tar.extractfile(member) + if h5_file is None: + continue + with tempfile.NamedTemporaryFile(suffix=".h5", delete=True) as tmp: + tmp.write(h5_file.read()) + tmp.flush() + with 
h5py.File(tmp.name, "r") as f: + frame_labels = f["label_indices"][:].tolist() + video_metadata = metadata_df[metadata_df["path"] == video_path] + if len(video_metadata) == 0: + continue + video_meta = video_metadata.iloc[0] + example = { + "path": video_path, + "dataset": "of-syn", + "frame_labels": frame_labels, + } + for field in metadata_fields: + if field in video_meta and pd.notna(video_meta[field]): + example[field] = str(video_meta[field]) + yield idx, example + idx += 1 + except Exception as e: + warnings.warn(f"Failed to process {member.name}: {e}") + continue + else: + hdf5_files = sorted(hdf5_path.glob("**/*.h5")) + idx = 0 + for h5_file_path in hdf5_files: + relative_path = h5_file_path.relative_to(hdf5_path) + video_path = str(relative_path.with_suffix("")) + if valid_paths is not None and video_path not in valid_paths: + continue + try: + with h5py.File(h5_file_path, "r") as f: + frame_labels = f["label_indices"][:].tolist() + video_metadata = metadata_df[metadata_df["path"] == video_path] + if len(video_metadata) == 0: + continue + video_meta = video_metadata.iloc[0] + example = { + "path": video_path, + "dataset": "of-syn", + "frame_labels": frame_labels, + } + for field in metadata_fields: + if field in video_meta and pd.notna(video_meta[field]): + example[field] = str(video_meta[field]) + yield idx, example + idx += 1 + except Exception as e: + warnings.warn(f"Failed to process {h5_file_path}: {e}") + continue + + def _gen_paths_only(self, split_file): + """Load paths only from a split file.""" + df = pd.read_csv(split_file) + for idx, row in df.iterrows(): + yield idx, {"path": row["path"]} + + def _row_to_example(self, row): + """Convert a DataFrame row to a typed example dict. + + Only includes fields present in the row. HuggingFace's Features.encode_example() + will ignore extra fields and fill missing optional fields. 
+ """ + example = {"path": str(row["path"])} + + # Core temporal fields + for field, dtype in [ + ("label", int), ("start", float), ("end", float), + ("subject", int), ("cam", int), + ]: + if field in row.index and pd.notna(row[field]): + example[field] = dtype(row[field]) + + if "dataset" in row.index and pd.notna(row["dataset"]): + example["dataset"] = str(row["dataset"]) + + # Demographic and scene metadata (present only for syn data) + for field in [ + "age_group", "gender_presentation", "monk_skin_tone", + "race_ethnicity_omb", "bmi_band", "height_band", + "environment_category", "camera_shot", "speed", + "camera_elevation", "camera_azimuth", "camera_distance", + ]: + if field in row.index and pd.notna(row[field]): + example[field] = str(row[field]) + + return example diff --git a/parquet/OOPS/test-00000-of-00001.parquet b/parquet/OOPS/test-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa --- /dev/null +++ b/parquet/OOPS/test-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7 +size 46279 diff --git a/parquet/OOPS/train-00000-of-00001.parquet b/parquet/OOPS/train-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..7aba2d5963ef13d5b9f4ad002ab149ecb802777b --- /dev/null +++ b/parquet/OOPS/train-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:439de7cca631ff21f27fa266407b0d7912a73c91e4d777fcc27f02690d65aa2c +size 17402 diff --git a/parquet/OOPS/validation-00000-of-00001.parquet b/parquet/OOPS/validation-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..6a03a653de257b228b0bac029ed971043f361aa6 --- /dev/null +++ b/parquet/OOPS/validation-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:cf764c5161cff04ad4183ecbc344c202bee4bd7d0bf4f6e1d019e78be843d204 +size 11006 diff --git a/parquet/caucafall/test-00000-of-00001.parquet b/parquet/caucafall/test-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..64031300809ee185c3467935310a66800fa3c6e3 --- /dev/null +++ b/parquet/caucafall/test-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56f36d14fe193fb9fa001df09afe46f56d232d13864105d863ee585b37eb1d60 +size 4872 diff --git a/parquet/caucafall/train-00000-of-00001.parquet b/parquet/caucafall/train-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..8844a6c939ae068d8414e9ca41e000cf5c2e1648 --- /dev/null +++ b/parquet/caucafall/train-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d7165eff7194a7d36abc1f99500202aca8985bd502bbd5f41a9332d1a2cbfb7 +size 6448 diff --git a/parquet/caucafall/validation-00000-of-00001.parquet b/parquet/caucafall/validation-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..cfef9255ce6647cd08ff039e74bbd144e29f69a1 --- /dev/null +++ b/parquet/caucafall/validation-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90390ae73ad1c0ddc0ac92914e747f5eac833729bf98d6aba5c213a763c07237 +size 4647 diff --git a/parquet/cmdfall/test-00000-of-00001.parquet b/parquet/cmdfall/test-00000-of-00001.parquet new file mode 100644 index 0000000000000000000000000000000000000000..becf1a6ad9f775d34bb25fca45674156b8de5336 --- /dev/null +++ b/parquet/cmdfall/test-00000-of-00001.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c0f657c673bd860e3576eff27d41f2ca5921c72707c12451fb2eea708384ea7 +size 52220 diff --git a/parquet/cmdfall/train-00000-of-00001.parquet b/parquet/cmdfall/train-00000-of-00001.parquet new file mode 100644 index 
0000000000000000000000000000000000000000..2864ef59f035214551c12f956689428935e13a51
--- /dev/null
+++ b/parquet/cmdfall/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6dbaf81afb7d332c7ca6436c4d33b26e608fe864ae1c3ad34990958e52588031
+size 93589
diff --git a/parquet/cmdfall/validation-00000-of-00001.parquet b/parquet/cmdfall/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..f7b4b47221ba4af10ce3969ce7bcf5a905935687
--- /dev/null
+++ b/parquet/cmdfall/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c070ec7048a682fda4c7425aaccf9a2b4f96ee8e4ca7fe267e5acf6ca6c6712
+size 18969
diff --git a/parquet/cs-staged-wild/test-00000-of-00001.parquet b/parquet/cs-staged-wild/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa
--- /dev/null
+++ b/parquet/cs-staged-wild/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
+size 46279
diff --git a/parquet/cs-staged-wild/train-00000-of-00001.parquet b/parquet/cs-staged-wild/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..d5f1947482443c3249112b94d444e1e9b69d9ba0
--- /dev/null
+++ b/parquet/cs-staged-wild/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d60e214dbb15b7bdaa976e03a5443168d0fe6d0f36d92260adfbb330268ff717
+size 157509
diff --git a/parquet/cs-staged-wild/validation-00000-of-00001.parquet b/parquet/cs-staged-wild/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..b89d7fd433628fed9897bef6deb7d1cea464cb25
--- /dev/null
+++ b/parquet/cs-staged-wild/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c8d927168548653d2fc2198cd662dfc4919316f3e20bc713b52c2cd751f250d
+size 24321
diff --git a/parquet/cs-staged/test-00000-of-00001.parquet b/parquet/cs-staged/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..ae2c6ca36522775b4858991e04d20fc458317854
--- /dev/null
+++ b/parquet/cs-staged/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de99a15c53a7d10801b61ea4301a6452d2e94d9f74336d0cf716016d0a420491
+size 90482
diff --git a/parquet/cs-staged/train-00000-of-00001.parquet b/parquet/cs-staged/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..d5f1947482443c3249112b94d444e1e9b69d9ba0
--- /dev/null
+++ b/parquet/cs-staged/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d60e214dbb15b7bdaa976e03a5443168d0fe6d0f36d92260adfbb330268ff717
+size 157509
diff --git a/parquet/cs-staged/validation-00000-of-00001.parquet b/parquet/cs-staged/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..b89d7fd433628fed9897bef6deb7d1cea464cb25
--- /dev/null
+++ b/parquet/cs-staged/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c8d927168548653d2fc2198cd662dfc4919316f3e20bc713b52c2cd751f250d
+size 24321
diff --git a/parquet/cs/test-00000-of-00001.parquet b/parquet/cs/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..2642cac0f221e9d8e2b14e1c9adcd2538895f10d
--- /dev/null
+++ b/parquet/cs/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0783406d42caea66d7d75f75e0a3108d2b17523aaee7bc962590a54cbce6a786
+size 139074
diff --git a/parquet/cs/train-00000-of-00001.parquet b/parquet/cs/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..8f572558aea8c61026653711fb080073aeb333d7
--- /dev/null
+++ b/parquet/cs/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fa18b6a07176eafdc27765f8a3d1a16f3f07333f3f25e78b4613f9fd1a76e17
+size 171245
diff --git a/parquet/cs/validation-00000-of-00001.parquet b/parquet/cs/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..d49ac991f910224fa77152a59cc793da823c13e2
--- /dev/null
+++ b/parquet/cs/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:529bddf37e746efddf77068ee9a577d8d2983754e432c9ca5b0df78bdbda9cc2
+size 33666
diff --git a/parquet/cv-staged-wild/test-00000-of-00001.parquet b/parquet/cv-staged-wild/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa
--- /dev/null
+++ b/parquet/cv-staged-wild/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
+size 46279
diff --git a/parquet/cv-staged-wild/train-00000-of-00001.parquet b/parquet/cv-staged-wild/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..290245b9feedc417ed397421954cd1edd95c6ffd
--- /dev/null
+++ b/parquet/cv-staged-wild/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b3355d27e4e9665756b3da472ea080192e681e595a1f318dda6739bf1a4ff01
+size 113467
diff --git a/parquet/cv-staged-wild/validation-00000-of-00001.parquet b/parquet/cv-staged-wild/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..41eb5cf4271a763cf6f294bdb05154f44b5d6ea3
--- /dev/null
+++ b/parquet/cv-staged-wild/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d47789598ae465a3f9284760d17eb9073d8f94ec667280c5e22830cad463f88b
+size 103804
diff --git a/parquet/cv-staged/test-00000-of-00001.parquet b/parquet/cv-staged/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..f430d2f547924110ed4b27c5cc3e6626fc9df6e4
--- /dev/null
+++ b/parquet/cv-staged/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00d48d217da8eb98a041fd0a86e9b51703c803dd5c3cc35bad625a8d0ee14c26
+size 180622
diff --git a/parquet/cv-staged/train-00000-of-00001.parquet b/parquet/cv-staged/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..290245b9feedc417ed397421954cd1edd95c6ffd
--- /dev/null
+++ b/parquet/cv-staged/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b3355d27e4e9665756b3da472ea080192e681e595a1f318dda6739bf1a4ff01
+size 113467
diff --git a/parquet/cv-staged/validation-00000-of-00001.parquet b/parquet/cv-staged/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..41eb5cf4271a763cf6f294bdb05154f44b5d6ea3
--- /dev/null
+++ b/parquet/cv-staged/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d47789598ae465a3f9284760d17eb9073d8f94ec667280c5e22830cad463f88b
+size 103804
diff --git a/parquet/cv/test-00000-of-00001.parquet b/parquet/cv/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..b417ca26aa9df2d74ee75b9a4d79785ac048ab79
--- /dev/null
+++ b/parquet/cv/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1ddcfb78f6334bd2c59f58b9dcf6d22b4ea94223d50240f82d258a8129ace1a
+size 225925
diff --git a/parquet/cv/train-00000-of-00001.parquet b/parquet/cv/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..7472805ba2476f6a00929d996504ea19121c2af7
--- /dev/null
+++ b/parquet/cv/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6829948633305c80ee633ae8240842670354685633e2a8e8f4a2fed05e32797
+size 127828
diff --git a/parquet/cv/validation-00000-of-00001.parquet b/parquet/cv/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..cfefe05f0ff0386bf8c3833ff4c8c8d51039b422
--- /dev/null
+++ b/parquet/cv/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4880d7837978e7fee9981de3bf657e32bb4857014f03a579cd0e73423902819
+size 111988
diff --git a/parquet/edf/test-00000-of-00001.parquet b/parquet/edf/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..a2d6e303a3e7b9b5fa917210e5c30f61472ba24a
--- /dev/null
+++ b/parquet/edf/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c322cdf427680f6840e61d0dce9e011867f6be5dded25fc5898abecad86534b4
+size 5693
diff --git a/parquet/edf/train-00000-of-00001.parquet b/parquet/edf/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..fa64f14522fe112b80c94ecca64692cc4596d368
--- /dev/null
+++ b/parquet/edf/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2839d6079515f3a3ac5acf5072df97a94ea1c04590b2b8027db2a24b9d95545
+size 7704
diff --git a/parquet/edf/validation-00000-of-00001.parquet b/parquet/edf/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..12c3242bca9b95e353c9814473aa9e09be7e2cb5
--- /dev/null
+++ b/parquet/edf/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8c2698f76a52a889c4ca02cb9ac59a0d42af6269db183a0f0026947ee4500f4
+size 5232
diff --git a/parquet/gmdcsa24/test-00000-of-00001.parquet b/parquet/gmdcsa24/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..28e66e8b2a76d854c2e9eaec51a394b2728002b5
--- /dev/null
+++ b/parquet/gmdcsa24/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d13a0250ad70fdfad33fedb2f6d77add15f0bf286fa3e502938de9b3883031fc
+size 5409
diff --git a/parquet/gmdcsa24/train-00000-of-00001.parquet b/parquet/gmdcsa24/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..bb46e72ffe230df95778c0062a4d7847c934b976
--- /dev/null
+++ b/parquet/gmdcsa24/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6339d18286b632e7cc9764497c732088e20348674c36d129d33cb29fb3412622
+size 6747
diff --git a/parquet/gmdcsa24/validation-00000-of-00001.parquet b/parquet/gmdcsa24/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..a63a2c2602e91d9bfc4a685495e3cd8293920859
--- /dev/null
+++ b/parquet/gmdcsa24/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c5a6d927e436f2446c279c2e098de1ad3dbfe0646f3d1e5808f988ea58ea968
+size 5982
diff --git a/parquet/labels-syn/train-00000-of-00001.parquet b/parquet/labels-syn/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..b835dc5cc0fa93e936020f46e99b9d6c4414283b
--- /dev/null
+++ b/parquet/labels-syn/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:331c7543e7131935b7d5211dc820543d20bad2fb205bf00702fdb79021c12cac
+size 225449
diff --git a/parquet/labels/train-00000-of-00001.parquet b/parquet/labels/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..c2ab9d2898a22004f1d29dfa55ee209734b3e7fb
--- /dev/null
+++ b/parquet/labels/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5169d3e95b26080527265516d415d068a83c3dea4cddca8d0828a8d2345fd3a
+size 309792
diff --git a/parquet/le2i/test-00000-of-00001.parquet b/parquet/le2i/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..7a43c4747f88aee7b84f22d761127b5712b63b6a
--- /dev/null
+++ b/parquet/le2i/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c0644ac5bdfa306db7211ba110f99c6bbe3c07074a5c89159c6ef24338a4bba
+size 6708
diff --git a/parquet/le2i/train-00000-of-00001.parquet b/parquet/le2i/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..0bbe5c9c27ec2133a6200c627ae93f0607fb81f3
--- /dev/null
+++ b/parquet/le2i/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:736ab4c0f5715f5ad6e319582b04a46bcdc32c4f8377c8cd9e8442aadd9e97e0
+size 11895
diff --git a/parquet/le2i/validation-00000-of-00001.parquet b/parquet/le2i/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..3e595391899cd44aaff2bd791c68f6aaf8da9af8
--- /dev/null
+++ b/parquet/le2i/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d2a340314d8d07d0ae6fcdcbeee69d5344b11bc0942c22e0dd2ab969f55dc39
+size 5360
diff --git a/parquet/mcfd/train-00000-of-00001.parquet b/parquet/mcfd/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..384d819ecaea09529c7e80197749f75167f17592
--- /dev/null
+++ b/parquet/mcfd/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87d258476d2d3280dcac960f7c9ee6b3e29210be2842600457ec448305ffdfed
+size 18819
diff --git a/parquet/metadata-syn/train-00000-of-00001.parquet b/parquet/metadata-syn/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..605e9375dea33087591e5358b17e10fda438ae0d
--- /dev/null
+++ b/parquet/metadata-syn/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3e4cd9cf3ac23a087f17621d129f8910fccbfdf81fe965cb415bc6075639175
+size 112932
diff --git a/parquet/occu/test-00000-of-00001.parquet b/parquet/occu/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..a25312121b9e5d7895c103da5090d27be8688a13
--- /dev/null
+++ b/parquet/occu/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c177a20d5c6c78bda80ffc34bf181311f52dc47905786088f02a7627ea708094
+size 5450
diff --git a/parquet/occu/train-00000-of-00001.parquet b/parquet/occu/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..f06d34ed38140998890bc27472febcd244b24722
--- /dev/null
+++ b/parquet/occu/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f62589b60ee5096c0daabeef98614722ad7d06e78e8fd18cc3dd0091e198c977
+size 7538
diff --git a/parquet/occu/validation-00000-of-00001.parquet b/parquet/occu/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..92973b5c3eaa33105e98420e049d5eba22e14118
--- /dev/null
+++ b/parquet/occu/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe8e026e7699b323385583f855f846811bc805dc86775f9a6051444f3f639eeb
+size 5365
diff --git a/parquet/of-itw/test-00000-of-00001.parquet b/parquet/of-itw/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa
--- /dev/null
+++ b/parquet/of-itw/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
+size 46279
diff --git a/parquet/of-itw/train-00000-of-00001.parquet b/parquet/of-itw/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..7aba2d5963ef13d5b9f4ad002ab149ecb802777b
--- /dev/null
+++ b/parquet/of-itw/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:439de7cca631ff21f27fa266407b0d7912a73c91e4d777fcc27f02690d65aa2c
+size 17402
diff --git a/parquet/of-itw/validation-00000-of-00001.parquet b/parquet/of-itw/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..6a03a653de257b228b0bac029ed971043f361aa6
--- /dev/null
+++ b/parquet/of-itw/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf764c5161cff04ad4183ecbc344c202bee4bd7d0bf4f6e1d019e78be843d204
+size 11006
diff --git a/parquet/of-sta-cs/test-00000-of-00001.parquet b/parquet/of-sta-cs/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..ae2c6ca36522775b4858991e04d20fc458317854
--- /dev/null
+++ b/parquet/of-sta-cs/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de99a15c53a7d10801b61ea4301a6452d2e94d9f74336d0cf716016d0a420491
+size 90482
diff --git a/parquet/of-sta-cs/train-00000-of-00001.parquet b/parquet/of-sta-cs/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..d5f1947482443c3249112b94d444e1e9b69d9ba0
--- /dev/null
+++ b/parquet/of-sta-cs/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d60e214dbb15b7bdaa976e03a5443168d0fe6d0f36d92260adfbb330268ff717
+size 157509
diff --git a/parquet/of-sta-cs/validation-00000-of-00001.parquet b/parquet/of-sta-cs/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..b89d7fd433628fed9897bef6deb7d1cea464cb25
--- /dev/null
+++ b/parquet/of-sta-cs/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c8d927168548653d2fc2198cd662dfc4919316f3e20bc713b52c2cd751f250d
+size 24321
diff --git a/parquet/of-sta-cv/test-00000-of-00001.parquet b/parquet/of-sta-cv/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..f430d2f547924110ed4b27c5cc3e6626fc9df6e4
--- /dev/null
+++ b/parquet/of-sta-cv/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00d48d217da8eb98a041fd0a86e9b51703c803dd5c3cc35bad625a8d0ee14c26
+size 180622
diff --git a/parquet/of-sta-cv/train-00000-of-00001.parquet b/parquet/of-sta-cv/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..290245b9feedc417ed397421954cd1edd95c6ffd
--- /dev/null
+++ b/parquet/of-sta-cv/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b3355d27e4e9665756b3da472ea080192e681e595a1f318dda6739bf1a4ff01
+size 113467
diff --git a/parquet/of-sta-cv/validation-00000-of-00001.parquet b/parquet/of-sta-cv/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..41eb5cf4271a763cf6f294bdb05154f44b5d6ea3
--- /dev/null
+++ b/parquet/of-sta-cv/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d47789598ae465a3f9284760d17eb9073d8f94ec667280c5e22830cad463f88b
+size 103804
diff --git a/parquet/of-sta-itw-cs/test-00000-of-00001.parquet b/parquet/of-sta-itw-cs/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa
--- /dev/null
+++ b/parquet/of-sta-itw-cs/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
+size 46279
diff --git a/parquet/of-sta-itw-cs/train-00000-of-00001.parquet b/parquet/of-sta-itw-cs/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..d5f1947482443c3249112b94d444e1e9b69d9ba0
--- /dev/null
+++ b/parquet/of-sta-itw-cs/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d60e214dbb15b7bdaa976e03a5443168d0fe6d0f36d92260adfbb330268ff717
+size 157509
diff --git a/parquet/of-sta-itw-cs/validation-00000-of-00001.parquet b/parquet/of-sta-itw-cs/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..b89d7fd433628fed9897bef6deb7d1cea464cb25
--- /dev/null
+++ b/parquet/of-sta-itw-cs/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c8d927168548653d2fc2198cd662dfc4919316f3e20bc713b52c2cd751f250d
+size 24321
diff --git a/parquet/of-sta-itw-cv/test-00000-of-00001.parquet b/parquet/of-sta-itw-cv/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa
--- /dev/null
+++ b/parquet/of-sta-itw-cv/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
+size 46279
diff --git a/parquet/of-sta-itw-cv/train-00000-of-00001.parquet b/parquet/of-sta-itw-cv/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..290245b9feedc417ed397421954cd1edd95c6ffd
--- /dev/null
+++ b/parquet/of-sta-itw-cv/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b3355d27e4e9665756b3da472ea080192e681e595a1f318dda6739bf1a4ff01
+size 113467
diff --git a/parquet/of-sta-itw-cv/validation-00000-of-00001.parquet b/parquet/of-sta-itw-cv/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..41eb5cf4271a763cf6f294bdb05154f44b5d6ea3
--- /dev/null
+++ b/parquet/of-sta-itw-cv/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d47789598ae465a3f9284760d17eb9073d8f94ec667280c5e22830cad463f88b
+size 103804
diff --git a/parquet/of-syn-cross-age/test-00000-of-00001.parquet b/parquet/of-syn-cross-age/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e3c1bf32da6d3cb11ee0466d94c08ce7b2d5b4b6
--- /dev/null
+++ b/parquet/of-syn-cross-age/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3a07c0f4193496424c9d94f7bfd907e5754e4c526ef07fc5558f4a6aba70d48
+size 124369
diff --git a/parquet/of-syn-cross-age/train-00000-of-00001.parquet b/parquet/of-syn-cross-age/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..ab3e450133bbde50fe530a52e4eb20828ba255ce
--- /dev/null
+++ b/parquet/of-syn-cross-age/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b18816b0c2c45c73164875ba7187bd83206e9a86c2200ac8675409e6d84f3d0
+size 86210
diff --git a/parquet/of-syn-cross-age/validation-00000-of-00001.parquet b/parquet/of-syn-cross-age/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..f3841fc53f3381749c9d36eede3b4146e25549c2
--- /dev/null
+++ b/parquet/of-syn-cross-age/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7cb9cc4fc72e1703cda66c06e4ba6316714110790cf8a64fe61fba589ef4c53
+size 49919
diff --git a/parquet/of-syn-cross-bmi/test-00000-of-00001.parquet b/parquet/of-syn-cross-bmi/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..cf3d0a755bda458887dfe21b640c67703df9beb6
--- /dev/null
+++ b/parquet/of-syn-cross-bmi/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb49676429ed00b48eeba71438b40621ae6bbbfb9b92bc7c4e30866f6e83e735
+size 69680
diff --git a/parquet/of-syn-cross-bmi/train-00000-of-00001.parquet b/parquet/of-syn-cross-bmi/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..2c48208f46018ccbce5e42409561a60027cf8b1a
--- /dev/null
+++ b/parquet/of-syn-cross-bmi/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0915a46941794b047c3f3e5b6bba263a01ccd4064f89fed9fcf1ec7429db8625
+size 124091
diff --git a/parquet/of-syn-cross-bmi/validation-00000-of-00001.parquet b/parquet/of-syn-cross-bmi/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..00574d88bafe1a2377ea1aafa6860ab9fa94e888
--- /dev/null
+++ b/parquet/of-syn-cross-bmi/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69cf659abb810e5e3c74abd4723449fb5693c5476f83f564e2df4c9c7c7a0e13
+size 67681
diff --git a/parquet/of-syn-cross-ethnicity/test-00000-of-00001.parquet b/parquet/of-syn-cross-ethnicity/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..8257b043af63b5cae35ed1783b452781860f6916
--- /dev/null
+++ b/parquet/of-syn-cross-ethnicity/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec7d1b8359438d52c84fbc61102c8fd26cf1b12e6127a3bab543392a9d3fb45a
+size 108729
diff --git a/parquet/of-syn-cross-ethnicity/train-00000-of-00001.parquet b/parquet/of-syn-cross-ethnicity/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..085afb56e152d070b7390329ab0f03039ce702c8
--- /dev/null
+++ b/parquet/of-syn-cross-ethnicity/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2008bd17c104a54a8d1001d82f80a27215a916016c92ec143a7215ec4f238d4
+size 108831
diff --git a/parquet/of-syn-cross-ethnicity/validation-00000-of-00001.parquet b/parquet/of-syn-cross-ethnicity/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..51ab310598779c49e7a33f588088373cf239542b
--- /dev/null
+++ b/parquet/of-syn-cross-ethnicity/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4db06c1d9e1841d7603df01971cfd0b3b736bd761fbd2d26f561eee6c5d47bbb
+size 44663
diff --git a/parquet/of-syn-itw/test-00000-of-00001.parquet b/parquet/of-syn-itw/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..e6b6dca16bbcaf06e134d6f3fee2f8dc15d8a6fa
--- /dev/null
+++ b/parquet/of-syn-itw/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd496c163abcb617430940104ee715f6acb5ed6dd2aea7af34b9f3e057bb56e7
+size 46279
diff --git a/parquet/of-syn-itw/train-00000-of-00001.parquet b/parquet/of-syn-itw/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..826390a0d84450df7e213b8092e116437c8f58f4
--- /dev/null
+++ b/parquet/of-syn-itw/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e41548044fdd1f8c9a9f37e125f06ab1932e1e2d36ac046a7994d029bc8c832b
+size 137101
diff --git a/parquet/of-syn-itw/validation-00000-of-00001.parquet b/parquet/of-syn-itw/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..de4de40d15c2adfe5ed7f94a7b124954b1e07674
--- /dev/null
+++ b/parquet/of-syn-itw/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf6d58dba4eebf3e134cb0539253a9666f1dacfe545489286939d4c678f9be2e
+size 23574
diff --git a/parquet/of-syn/test-00000-of-00001.parquet b/parquet/of-syn/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..696d9d189b5c536d3e41e602dc215c3bec42f9df
--- /dev/null
+++ b/parquet/of-syn/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e63831d7e1c258febb33d3e8c443210186bc4a1677e8baafdeb3ea051ee800d
+size 36114
diff --git a/parquet/of-syn/train-00000-of-00001.parquet b/parquet/of-syn/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..90c7bcde2721d0ff99d3a14166057eb5571d1d83
--- /dev/null
+++ b/parquet/of-syn/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a5e7142ce6c83b3dd2d2be57f1c7d0f5473ac02b9074e9966a1761369a4c75a
+size 186348
diff --git a/parquet/of-syn/validation-00000-of-00001.parquet b/parquet/of-syn/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..7349f0b8409392d61b23214f17dbcd95379f9a49
--- /dev/null
+++ b/parquet/of-syn/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d20c1988a99f691a40c27788de396811cc73ff31dceb7cdb008ce455b07df075
+size 36310
diff --git a/parquet/up_fall/test-00000-of-00001.parquet b/parquet/up_fall/test-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..2afdd8a9727509fed1464bb67dea9ba6617adfc2
--- /dev/null
+++ b/parquet/up_fall/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0607105b0450318fd528e562dd230cfb06a5ac6e41bc2aad6a7449f3a30a5e7c
+size 9987
diff --git a/parquet/up_fall/train-00000-of-00001.parquet b/parquet/up_fall/train-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..2668e9e4ff9f3d2c3062a799ab615fb3fe672547
--- /dev/null
+++ b/parquet/up_fall/train-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b736b1ab8e8bb52f1fe729fa93e28bcff8104d07eb35d7f51acccdffea2bcf2
+size 27303
diff --git a/parquet/up_fall/validation-00000-of-00001.parquet b/parquet/up_fall/validation-00000-of-00001.parquet
new file mode 100644
index 0000000000000000000000000000000000000000..8c06af6c4b45a27c20a77c8b3623fe0e08c086e9
--- /dev/null
+++ b/parquet/up_fall/validation-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c6c386a919e3b89ba438d848ca75c9b1a3436735897a8afebae8bd954b74ccd
+size 5870