Ishwar Balappanawar committed
Commit 56b1ab6 · 1 Parent(s): 2d9c359

Switch to data-files viewer workflow
README.md CHANGED
@@ -1,21 +1,64 @@
  ---
  annotations_creators:
- - expert-generated
  language_creators:
- - other
  language: en
  license: cc-by-4.0
  multilinguality:
- - monolingual
  size_categories:
- - 1K<n<10K
  source_datasets:
- - combination
  task_categories:
- - other
  task_ids:
- - multi-label-classification
  pretty_name: CUEBench
  ---

  # CUEBench: Contextual Unobserved Entity Benchmark
@@ -33,8 +76,8 @@ CUEBench is a neurosymbolic benchmark that emphasizes **contextual entity predic

  | Config | File | Description |
  | --- | --- | --- |
- | `clue` *(default)* | `clue_metadata.jsonl` | Contextual Unobserved Entity (CLUE) frames with heavy occlusions and single-target predictions. |
- | `mep` | `mep_metadata.jsonl` | Multi-Entity Prediction (MEP) split that introduces complementary metadata and more diverse target sets. |

  When this dataset is viewed on Hugging Face, the dataset viewer automatically exposes a **config dropdown** so you can switch between `clue` and `mep` without leaving the UI.

@@ -82,13 +125,27 @@ dataset = load_dataset(
  from datasets import load_dataset

  dataset = load_dataset(
- path="cuebench",
- data_files="clue_metadata.jsonl",  # swap with "mep_metadata.jsonl" as needed
  split="train",
  )
  ```

- > **Tip:** From source, you can still switch configurations by pointing `data_files` to `clue_metadata.jsonl` or `mep_metadata.jsonl`.

  ## Metrics

@@ -96,7 +153,7 @@ dataset = load_dataset(

  ## Licensing

- The dataset is currently tagged as **CC-BY-4.0** in `cuebench.py`. Update this section if you select a different license.

  ## Citation

@@ -116,10 +173,15 @@ The dataset is currently tagged as **CC-BY-4.0** in `cuebench.py`. Update this s
  ```
  cuebench/
    README.md
-   cuebench.py
-   clue_metadata.jsonl
-   mep_metadata.jsonl
    metric.py    # optional metric script
    images/...   # optional or host separately
  ```
  4. Initialize Git + LFS:
@@ -133,11 +195,9 @@ The dataset is currently tagged as **CC-BY-4.0** in `cuebench.py`. Update this s
  git commit -m "Initial CUEBench dataset"
  git push origin main
  ```
- 5. Validate locally before pushing updates (optional but recommended):
-    - `datasets-cli test ./cuebench.py --all_configs`
-    - `python -m datasets.prepare_module ./cuebench.py`
- 6. On the Hub page, trigger the dataset preview to ensure the loader runs.
- 7. (Optional) Publish the metric under `metrics/cuebench-metric` following the Metrics Hub template and link it from the dataset card.

  Update these steps with any organization-specific tooling you use.
-
  ---
  annotations_creators:
+ - expert-generated
  language_creators:
+ - other
  language: en
  license: cc-by-4.0
  multilinguality:
+ - monolingual
  size_categories:
+ - 1K<n<10K
  source_datasets:
+ - combination
  task_categories:
+ - other
  task_ids:
+ - multi-label-classification
  pretty_name: CUEBench
+ configs:
+ - config_name: clue
+   default: true
+   data_files:
+   - split: train
+     path: data/clue/train.jsonl
+ - config_name: mep
+   data_files:
+   - split: train
+     path: data/mep/train.jsonl
+ dataset_info:
+ - config_name: clue
+   features:
+   - name: image_id
+     dtype: string
+   - name: observed_classes
+     sequence: string
+   - name: target_classes
+     sequence: string
+   - name: image_path
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1101143
+     num_examples: 1648
+   download_size: 1101143
+   dataset_size: 1101143
+ - config_name: mep
+   features:
+   - name: image_id
+     dtype: string
+   - name: observed_classes
+     sequence: string
+   - name: target_classes
+     sequence: string
+   - name: image_path
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 845579
+     num_examples: 1216
+   download_size: 845579
+   dataset_size: 845579
  ---

  # CUEBench: Contextual Unobserved Entity Benchmark
 
  | Config | File | Description |
  | --- | --- | --- |
+ | `clue` *(default)* | `data/clue/train.jsonl` | Contextual Unobserved Entity (CLUE) frames with heavy occlusions and single-target predictions. |
+ | `mep` | `data/mep/train.jsonl` | Multi-Entity Prediction (MEP) split that introduces complementary metadata and more diverse target sets. |

  When this dataset is viewed on Hugging Face, the dataset viewer automatically exposes a **config dropdown** so you can switch between `clue` and `mep` without leaving the UI.

 
  from datasets import load_dataset

  dataset = load_dataset(
+ path="json",
+ data_files={"train": "data/clue/train.jsonl"},  # swap with data/mep/train.jsonl
  split="train",
  )
  ```

+ > **Tip:** From source, you can still switch configurations by pointing `data_files` to `data/mep/train.jsonl`.
+
+ ### Regenerating viewer files
+
+ The repository keeps the original metadata dumps under `raw/`. To refresh the viewer-friendly JSONL files (e.g. after updating the raw annotations), run:
+
+ ```bash
+ /.venv/bin/python scripts/build_viewer_files.py
+ ```
+
+ This script adds the derived columns (`image_id`, `observed_classes`, etc.) and drops the converted files into `data/clue/train.jsonl` and `data/mep/train.jsonl`. It also updates `data/stats.json`, which is referenced by the dataset card to keep `dataset_info` counters accurate.
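Since each split is plain JSON Lines, the rows the `json` loader and the viewer see can be reproduced with the standard library alone. A minimal sketch using made-up records shaped like the columns described above (`image_id`, `observed_classes`, `target_classes`, `image_path`); the sample values are illustrative only:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical records shaped like the CUEBench columns; values are invented for the demo.
rows = [
    {"image_id": "seq01.00042", "observed_classes": ["car", "person"],
     "target_classes": ["bicycle"], "image_path": "images/seq01/00042.jpg"},
    {"image_id": "seq01.00043", "observed_classes": ["car"],
     "target_classes": ["person"], "image_path": "images/seq01/00043.jpg"},
]

# Write one JSON object per line -- the JSONL layout load_dataset("json", ...) expects.
path = Path(tempfile.mkdtemp()) / "train.jsonl"
with path.open("w", encoding="utf-8") as fh:
    for row in rows:
        fh.write(json.dumps(row) + "\n")

# Read it back the same way a loader would: parse each non-empty line as JSON.
loaded = [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines() if line.strip()]
print(len(loaded), loaded[0]["image_id"])  # → 2 seq01.00042
```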

  ## Metrics

 

  ## Licensing

+ The dataset is currently tagged as **CC-BY-4.0**. Update this section if you select a different license.

  ## Citation

 
  ```
  cuebench/
    README.md
+   data/
+     clue/train.jsonl
+     mep/train.jsonl
+   raw/
+     clue_metadata.jsonl
+     mep_metadata.jsonl
    metric.py    # optional metric script
+   scripts/build_viewer_files.py
+   scripts/push_to_hub.py
    images/...   # optional or host separately
  ```
  4. Initialize Git + LFS:
 
  git commit -m "Initial CUEBench dataset"
  git push origin main
  ```
+ 5. Regenerate viewer files anytime the raw metadata changes: `/.venv/bin/python scripts/build_viewer_files.py`
+ 6. Push the prepared splits to the Hub (per config) using `/.venv/bin/python scripts/push_to_hub.py --repo ishwarbb23/cuebench`
+ 7. On the Hub page, trigger the dataset preview to ensure the loader runs.
+ 8. (Optional) Publish the metric under `metrics/cuebench-metric` following the Metrics Hub template and link it from the dataset card.

  Update these steps with any organization-specific tooling you use.
 
cuebench.py DELETED
@@ -1,135 +0,0 @@
- import json
- import os
-
- from datasets import (
-     BuilderConfig,
-     DatasetInfo,
-     Features,
-     GeneratorBasedBuilder,
-     Sequence,
-     Split,
-     SplitGenerator,
-     Value,
-     Version,
- )
- from fsspec.implementations.local import LocalFileSystem
-
- HF_DATA_BASE_URL = "https://huggingface.co/datasets/ishwarbb23/cuebench/resolve/main"
- DATA_BASE_OVERRIDE = os.getenv("CUEBENCH_DATA_BASE_URL")
- CLUE_METADATA_FILENAME = "clue_metadata.jsonl"
- MEP_METADATA_FILENAME = "mep_metadata.jsonl"
-
-
- class CUEBenchConfig(BuilderConfig):
-     """Builder config that carries the backing metadata file."""
-
-     def __init__(self, *, data_files=None, **kwargs):
-         super().__init__(**kwargs)
-         self.data_files = data_files or {"train": CLUE_METADATA_FILENAME}
-
-
- class CUEBench(GeneratorBasedBuilder):
-     VERSION = Version("1.0.0")
-     BUILDER_CONFIGS = [
-         CUEBenchConfig(
-             name="clue",
-             version=VERSION,
-             description="Contextual Unobserved Entity (CLUE) split with occluded-entity targets.",
-             data_files={"train": CLUE_METADATA_FILENAME},
-         ),
-         CUEBenchConfig(
-             name="mep",
-             version=VERSION,
-             description="Multi-Entity Prediction (MEP) split with complementary metadata.",
-             data_files={"train": MEP_METADATA_FILENAME},
-         ),
-     ]
-     DEFAULT_CONFIG_NAME = "clue"
-
-     def _info(self):
-         return DatasetInfo(
-             description="CUEBench: Contextual Entity Prediction for Occluded or Unobserved Entities in Autonomous Driving.",
-             features=Features({
-                 "image_id": Value("string"),
-                 "observed_classes": Sequence(Value("string")),  # Properly represent lists
-                 "target_classes": Sequence(Value("string")),
-                 "image_path": Value("string"),
-             }),
-             citation="@misc{cuebench2025, title={CUEBench: Contextual Unobserved Entity Benchmark}, year={2025}, author={CUEBench Authors}}",
-             homepage="https://huggingface.co/datasets/ishwarbb23/cuebench",
-             license="CC-BY-4.0",
-         )
-
-     def _split_generators(self, dl_manager):
-         data_files = self.config.data_files or {"train": CLUE_METADATA_FILENAME}
-         train_files = data_files["train"] if isinstance(data_files, dict) else data_files
-         if isinstance(train_files, str):
-             train_files = [train_files]
-
-         resolved_files = [self._resolve_path(file_path, dl_manager) for file_path in train_files]
-
-         return [SplitGenerator(name=Split.TRAIN, gen_kwargs={"filepaths": resolved_files})]
-
-     def _resolve_path(self, file_path, dl_manager):
-         if file_path.startswith(("http://", "https://")):
-             resolved = dl_manager.download_and_extract(file_path)
-             return resolved[0] if isinstance(resolved, list) else resolved
-
-         local_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), file_path)
-         if os.path.exists(local_path):
-             return local_path
-
-         if DATA_BASE_OVERRIDE:
-             override = DATA_BASE_OVERRIDE.rstrip("/")
-             if override.startswith(("http://", "https://", "hf://")):
-                 remote_path = f"{override}/{file_path}"
-                 resolved = dl_manager.download_and_extract(remote_path)
-                 return resolved[0] if isinstance(resolved, list) else resolved
-
-             override_candidate = os.path.join(override, file_path)
-             if os.path.exists(override_candidate):
-                 return override_candidate
-
-         remote_path = f"{HF_DATA_BASE_URL}/{file_path}"
-         resolved = dl_manager.download_and_extract(remote_path)
-         return resolved[0] if isinstance(resolved, list) else resolved
-
-     def _generate_examples(self, filepaths):
-         if isinstance(filepaths, str):
-             filepaths = [filepaths]
-
-         idx = 0
-         for filepath in filepaths:
-             with open(filepath, "r", encoding="utf-8") as f:
-                 for line in f:
-                     example = json.loads(line)
-                     image_id = example.get("aligned_id") or example.get("image_id")
-                     if image_id is None:
-                         raise ValueError(f"Missing image identifier for example at line {idx}.")
-                     yield idx, {
-                         "image_id": image_id,
-                         "image_path": example["image_path"],
-                         "observed_classes": example["detected_classes"],
-                         "target_classes": example["target_classes"],
-                     }
-                     idx += 1
-
-     def _ensure_local_fs_protocol(self):
-         if isinstance(getattr(self, "_fs", None), LocalFileSystem):
-             protocol = getattr(self._fs, "protocol", None)
-             if protocol != "file":
-                 self._fs.protocol = "file"
-             output_dir = getattr(self, "_output_dir", None)
-             if isinstance(output_dir, str):
-                 stripped = self._fs._strip_protocol(output_dir)
-                 if stripped:
-                     self._output_dir = stripped
-
-     def _download_and_prepare(self, *args, **kwargs):
-         result = super()._download_and_prepare(*args, **kwargs)
-         self._ensure_local_fs_protocol()
-         return result
-
-     def as_dataset(self, *args, **kwargs):
-         self._ensure_local_fs_protocol()
-         return super().as_dataset(*args, **kwargs)
data/clue/train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:efa1537ff34d0d2b5f72e922e84f8cfa4114b1fb185025908d10cca04add5166
+ size 1101143
data/mep/train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8efcec1604098a9072d2848afaf4141a47a072760f1d19e28bab0e45b6020cf
+ size 845579
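Both `train.jsonl` entries above are Git LFS pointer files: the repository itself stores only the `version`, `oid`, and `size` lines, while the actual JSONL bytes live in LFS storage. Parsing such a pointer is a one-liner per field; a sketch using the `clue` pointer text shown above:

```python
# Text of a Git LFS pointer file (copied from the data/clue/train.jsonl entry above).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:efa1537ff34d0d2b5f72e922e84f8cfa4114b1fb185025908d10cca04add5166
size 1101143
"""

# Each line is "key value"; the oid value prefixes the digest with its hash algorithm.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)
print(algo, int(fields["size"]))  # → sha256 1101143
```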
data/stats.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "clue": {
+     "num_examples": 1648,
+     "num_bytes": 1101143,
+     "source_path": "raw/clue_metadata.jsonl",
+     "output_path": "data/clue/train.jsonl"
+   },
+   "mep": {
+     "num_examples": 1216,
+     "num_bytes": 845579,
+     "source_path": "raw/mep_metadata.jsonl",
+     "output_path": "data/mep/train.jsonl"
+   }
+ }
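The counters recorded in `stats.json` are easy to audit: `num_examples` is the number of non-empty lines in the split file, and `num_bytes` is its on-disk size (`stat().st_size` in the build script). A self-contained check against a throwaway JSONL file with made-up rows:

```python
import json
import os
import tempfile

# Stand-in for a split file such as data/clue/train.jsonl; rows are invented.
fd, path = tempfile.mkstemp(suffix=".jsonl")
with os.fdopen(fd, "w", encoding="utf-8", newline="\n") as fh:
    for i in range(3):
        fh.write(json.dumps({"image_id": f"seq.{i:05d}"}) + "\n")

# num_examples: count of non-empty lines; num_bytes: file size on disk.
with open(path, "r", encoding="utf-8") as fh:
    num_examples = sum(1 for line in fh if line.strip())
num_bytes = os.path.getsize(path)
print(num_examples, num_bytes)
```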
clue_metadata.jsonl → raw/clue_metadata.jsonl RENAMED
File without changes
mep_metadata.jsonl → raw/mep_metadata.jsonl RENAMED
File without changes
scripts/build_viewer_files.py ADDED
@@ -0,0 +1,97 @@
+ #!/usr/bin/env python3
+ """Utilities to convert the raw metadata dumps into viewer-friendly JSONL files."""
+ from __future__ import annotations
+
+ import json
+ from dataclasses import asdict, dataclass
+ from pathlib import Path
+ from typing import Dict, Iterator, MutableMapping
+
+ ROOT = Path(__file__).resolve().parents[1]
+ RAW_DIR = ROOT / "raw"
+ OUTPUT_DIR = ROOT / "data"
+
+ CONFIG_SOURCES: Dict[str, Path] = {
+     "clue": RAW_DIR / "clue_metadata.jsonl",
+     "mep": RAW_DIR / "mep_metadata.jsonl",
+ }
+
+ @dataclass
+ class BuildStats:
+     """Simple container for summary numbers we surface in README/stats.json."""
+
+     num_examples: int
+     num_bytes: int
+     source_path: str
+     output_path: str
+
+     def as_dict(self) -> Dict[str, object]:
+         return asdict(self)
+
+
+ def _normalize_record(record: MutableMapping[str, object]) -> MutableMapping[str, object]:
+     """Add the columns expected by the README and dataset viewer."""
+
+     image_id = record.get("aligned_id") or record.get("image_id")
+     if image_id is None:
+         seq = record.get("seq_name", "seq")
+         frame = record.get("frame_count", 0)
+         image_id = f"{seq}.{int(frame):05d}"
+     record["image_id"] = image_id
+
+     observed = record.get("observed_classes") or record.get("detected_classes") or []
+     record["observed_classes"] = observed
+     # Preserve the detected_classes alias so legacy tooling keeps working.
+     record.setdefault("detected_classes", observed)
+
+     record["target_classes"] = record.get("target_classes", [])
+     record["image_path"] = record.get("image_path")
+     return record
+
+
+ def _iter_records(path: Path) -> Iterator[MutableMapping[str, object]]:
+     with path.open("r", encoding="utf-8") as src:
+         for line in src:
+             if not line.strip():
+                 continue
+             yield json.loads(line)
+
+
+ def build_split(config_name: str, source_path: Path, output_path: Path) -> BuildStats:
+     output_path.parent.mkdir(parents=True, exist_ok=True)
+     count = 0
+     with output_path.open("w", encoding="utf-8") as dst:
+         for record in _iter_records(source_path):
+             normalized = _normalize_record(record)
+             dst.write(json.dumps(normalized, ensure_ascii=False) + "\n")
+             count += 1
+     num_bytes = output_path.stat().st_size
+     return BuildStats(
+         num_examples=count,
+         num_bytes=num_bytes,
+         source_path=str(source_path.relative_to(ROOT)),
+         output_path=str(output_path.relative_to(ROOT)),
+     )
+
+
+ def main() -> None:
+     stats: Dict[str, Dict[str, object]] = {}
+     for config_name, source in CONFIG_SOURCES.items():
+         if not source.exists():
+             raise FileNotFoundError(f"Missing source file for {config_name}: {source}")
+         output_path = OUTPUT_DIR / config_name / "train.jsonl"
+         summary = build_split(config_name, source, output_path)
+         stats[config_name] = summary.as_dict()
+         print(
+             f"[{config_name}] wrote {summary.num_examples} examples -> {summary.output_path} "
+             f"({summary.num_bytes} bytes)."
+         )
+     stats_path = OUTPUT_DIR / "stats.json"
+     with stats_path.open("w", encoding="utf-8") as handle:
+         json.dump(stats, handle, indent=2)
+         handle.write("\n")
+     print(f"Wrote summary stats to {stats_path.relative_to(ROOT)}")
+
+
+ if __name__ == "__main__":
+     main()
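One detail worth calling out in `_normalize_record` above is the id fallback: when a record has neither `aligned_id` nor `image_id`, the script synthesizes one from `seq_name` and a zero-padded `frame_count`. The rule in isolation (the function name here is illustrative, not from the script):

```python
def fallback_image_id(record: dict) -> str:
    # Mirrors _normalize_record: prefer explicit ids, else seq_name + 5-digit frame index.
    image_id = record.get("aligned_id") or record.get("image_id")
    if image_id is None:
        seq = record.get("seq_name", "seq")
        frame = record.get("frame_count", 0)
        image_id = f"{seq}.{int(frame):05d}"
    return image_id

print(fallback_image_id({"seq_name": "drive_07", "frame_count": 42}))  # → drive_07.00042
print(fallback_image_id({"aligned_id": "abc123"}))                     # → abc123
```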
scripts/push_to_hub.py ADDED
@@ -0,0 +1,67 @@
+ #!/usr/bin/env python3
+ """Upload the prepared viewer files to the Hugging Face Hub."""
+ from __future__ import annotations
+
+ import argparse
+ import json
+ from pathlib import Path
+ from typing import Dict
+
+ from datasets import Dataset
+
+ ROOT = Path(__file__).resolve().parents[1]
+ DATA_DIR = ROOT / "data"
+ CONFIGS = {
+     "clue": DATA_DIR / "clue" / "train.jsonl",
+     "mep": DATA_DIR / "mep" / "train.jsonl",
+ }
+
+
+ def read_stats() -> Dict[str, Dict[str, int]]:
+     stats_path = DATA_DIR / "stats.json"
+     if not stats_path.exists():
+         return {}
+     with stats_path.open("r", encoding="utf-8") as handle:
+         return json.load(handle)
+
+
+ def push_config(config_name: str, file_path: Path, repo_id: str, args: argparse.Namespace) -> None:
+     if not file_path.exists():
+         raise FileNotFoundError(f"Missing file for config '{config_name}': {file_path}")
+     print(f"Loading {config_name} from {file_path.relative_to(ROOT)} ...")
+     dataset = Dataset.from_json(str(file_path))
+     dataset.push_to_hub(
+         repo_id=repo_id,
+         config_name=config_name,
+         split=args.split,
+         private=args.private,
+         token=args.token,
+         branch=args.branch,
+         max_shard_size=args.max_shard_size,
+     )
+     print(f"✓ Uploaded {config_name}/{args.split} to {repo_id}")
+
+
+ def main() -> None:
+     parser = argparse.ArgumentParser(description=__doc__)
+     parser.add_argument("--repo", required=True, help="Target dataset repo id, e.g. user/cuebench")
+     parser.add_argument("--split", default="train", help="Split name to register on the Hub (default: train)")
+     parser.add_argument("--branch", default=None, help="Optional branch to push to (defaults to repo default)")
+     parser.add_argument("--token", default=None, help="HuggingFace token; omit if already logged in via CLI")
+     parser.add_argument("--private", action="store_true", help="Mark the uploaded dataset as private")
+     parser.add_argument("--max-shard-size", default="500MB", help="Shard threshold passed to push_to_hub")
+     args = parser.parse_args()
+
+     stats = read_stats()
+     for config_name, path in CONFIGS.items():
+         push_config(config_name, path, args.repo, args)
+         if config_name in stats:
+             summary = stats[config_name]
+             print(
+                 f"  ↳ stats: {summary.get('num_examples', '?')} examples, "
+                 f"{summary.get('num_bytes', '?')} bytes"
+             )
+
+
+ if __name__ == "__main__":
+     main()