Ishwar Balappanawar committed on
Commit 2dce2d8 · 1 Parent(s): 98d1657

Prepare dataset for Hugging Face

Files changed (4)
  1. .gitattributes +2 -0
  2. README.md +94 -5
  3. cuebench.py +33 -7
  4. requirements.txt +5 -0
.gitattributes ADDED
@@ -0,0 +1,2 @@
+ metadata.jsonl filter=lfs diff=lfs merge=lfs -text
+ images/* filter=lfs diff=lfs merge=lfs -text
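The two LFS rules above can be sanity-checked with `git check-attr` before pushing; this is an illustrative snippet (the `.png` filename is a placeholder), not part of the commit:

```shell
# Verify that .gitattributes routes the metadata and images through Git LFS.
# Run inside the dataset repo; works even before `git lfs install`.
git check-attr filter -- metadata.jsonl images/00003.00019.png
# Each path should report "filter: lfs"
```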
README.md CHANGED
@@ -2,16 +2,105 @@
  
  CUEBench is a neurosymbolic benchmark that emphasizes **contextual entity prediction** in autonomous driving scenes. Unlike traditional detection tasks, CUEBench focuses on reasoning over **unobserved entities** — objects that may be occluded, out-of-frame, or affected by sensor failures.
  
- ## Task
- 
- **Input**: A scene ID and a set of `observed_classes` present in the scene
- **Output**: Predict the `target_classes` that were present but unobserved
- 
- ### Example
+ ## Dataset Summary
+ 
+ - **Modalities**: RGB dashcam imagery + symbolic annotations (provided as metadata)
+ - **Primary task**: Predict unobserved `target_classes` given the set of `observed_classes` in a scene
+ - **Geography / Scenario**: Urban autonomous driving across diverse traffic densities
+ - **License**: CC-BY-4.0 (you may adapt if different licensing is desired)
+ 
+ ## Dataset Structure
+ 
+ ### Data Fields
+ | Field | Type | Description |
+ | --- | --- | --- |
+ | `image_id` | `string` | Unique identifier for each frame (`aligned_id` in the raw metadata). |
+ | `image_path` | `string` | Relative path to the rendered frame image. |
+ | `observed_classes` | `list[string]` | Entity classes detected in-frame (cars, cones, pedestrians, etc.). |
+ | `target_classes` | `list[string]` | Entities inferred to exist but unobserved (occluded, off-frame, sensor failure). |
+ 
+ ### Splits
+ Currently only a **train** split is defined via `metadata.jsonl`. Additional splits can be created before upload if desired (e.g., hold out 10% for validation).
+ 
+ ### Label Taxonomy
+ Representative classes include: `Car`, `Bus`, `Pedestrian`, `PickupTruck`, `MediumSizedTruck`, `Animal`, `Standing`, `VehicleWithRider`, `ConstructionSign`, `TrafficCone`, and more (~40 classes). Extend this section with the final taxonomy before publication if you want exhaustive documentation.
+ 
+ ## Example Record
  ```json
  {
    "image_id": "00003.00019",
    "observed_classes": ["Car", "Bus", "Pedestrian"],
-   "target_classes": ["PickupTruck"]
+   "target_classes": ["PickupTruck"],
+   "image_path": "images/00003.00019.png"
  }
+ ```
+ 
+ ## Usage
+ 
+ ### Loading with `datasets`
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset(
+     path="ishwarbb/cuebench",
+     split="train"
+ )
+ ```
+ 
+ ### Working From Source
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset(path="cuebench", data_files="metadata.jsonl", split="train")
+ ```
+ 
+ ## Metrics
+ 
+ `metric.py` defines **Mean Reciprocal Rank**, **Hits@K (1/3/5/10)**, and **Coverage@K (1/3/5/10)** over the predicted class rankings. When publishing to the Hugging Face Metrics Hub, expose the `compute(predictions, references)` signature so leaderboard integrations can consume it.
+ 
+ ## Licensing
+ 
+ The dataset is currently tagged as **CC-BY-4.0** in `cuebench.py`. Update this section if you select a different license.
+ 
+ ## Citation
+ 
+ ```
+ @misc{cuebench2025,
+   title  = {CUEBench: Contextual Unobserved Entity Benchmark},
+   author = {CUEBench Authors},
+   year   = {2025}
+ }
+ ```
+ 
+ ## Hugging Face Upload Checklist
+ 
+ 1. Install tools: `pip install datasets huggingface_hub` and run `huggingface-cli login`.
+ 2. Create the dataset repo: `huggingface-cli repo create cuebench --type dataset` (or via UI).
+ 3. Ensure directory layout:
+    ```
+    cuebench/
+      README.md
+      cuebench.py
+      metadata.jsonl
+      metric.py        # optional metric script
+      images/...       # optional or host separately
+    ```
+ 4. Initialize Git + LFS:
+    ```bash
+    cd cuebench
+    git init
+    git lfs install
+    git lfs track "*.jsonl" "images/*"
+    git remote add origin https://huggingface.co/datasets/ishwarbb/cuebench
+    git add .
+    git commit -m "Initial CUEBench dataset"
+    git push origin main
+    ```
+ 5. Validate locally before pushing updates (optional but recommended):
+    - `datasets-cli test ./cuebench.py --all_configs`
+    - `python -m datasets.prepare_module ./cuebench.py`
+ 6. On the Hub page, trigger the dataset preview to ensure the loader runs.
+ 7. (Optional) Publish the metric under `metrics/cuebench-metric` following the Metrics Hub template and link it from the dataset card.
+ 
+ Update these steps with any organization-specific tooling you use.
  
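The Metrics section of the README names Mean Reciprocal Rank, Hits@K, and Coverage@K, but `metric.py` itself is not part of this commit. As a rough sketch, the first two could be implemented as below, assuming `predictions` are ranked class lists and `references` are gold class lists (function names and the exact tie/miss handling are illustrative, not the repo's actual definitions):

```python
# Illustrative re-implementation of two ranking metrics named in the README;
# the real metric.py may define them differently.

def mean_reciprocal_rank(predictions, references):
    """Mean of 1/rank of the first gold class in each ranked list (0 if none)."""
    total = 0.0
    for ranked, gold in zip(predictions, references):
        gold_set = set(gold)
        for rank, cls in enumerate(ranked, start=1):
            if cls in gold_set:
                total += 1.0 / rank
                break
    return total / len(predictions)


def hits_at_k(predictions, references, k):
    """Fraction of examples whose top-k predictions contain any gold class."""
    hits = sum(
        1
        for ranked, gold in zip(predictions, references)
        if set(ranked[:k]) & set(gold)
    )
    return hits / len(predictions)


predictions = [["Car", "PickupTruck", "Bus"], ["Bus", "Animal", "Pedestrian"]]
references = [["PickupTruck"], ["Pedestrian"]]
mrr = mean_reciprocal_rank(predictions, references)  # (1/2 + 1/3) / 2
h1 = hits_at_k(predictions, references, k=1)
h3 = hits_at_k(predictions, references, k=3)
```

For Hub leaderboard use, these would sit behind the `compute(predictions, references)` signature the README recommends.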
cuebench.py CHANGED
@@ -1,7 +1,27 @@
  import json
- from datasets import DatasetInfo, GeneratorBasedBuilder, SplitGenerator, Split, Value, Features, Sequence
+ from datasets import (
+     BuilderConfig,
+     DatasetInfo,
+     Features,
+     GeneratorBasedBuilder,
+     Sequence,
+     Split,
+     SplitGenerator,
+     Value,
+     Version,
+ )
  
  class CUEBench(GeneratorBasedBuilder):
+     VERSION = Version("1.0.0")
+     BUILDER_CONFIGS = [
+         BuilderConfig(
+             name="default",
+             version=VERSION,
+             description="Contextual Unobserved Entity Benchmark leveraging autonomous driving scenes.",
+         )
+     ]
+     DEFAULT_CONFIG_NAME = "default"
+ 
      def _info(self):
          return DatasetInfo(
              description="CUEBench: Contextual Entity Prediction for Occluded or Unobserved Entities in Autonomous Driving.",
@@ -11,24 +31,30 @@ class CUEBench(GeneratorBasedBuilder):
                  "target_classes": Sequence(Value("string")),
                  "image_path": Value("string")
              }),
-             citation="",
-             homepage=""
+             citation="@misc{cuebench2025, title={CUEBench: Contextual Unobserved Entity Benchmark}, year={2025}, author={CUEBench Authors}}",
+             homepage="https://huggingface.co/datasets/ishwarbb/cuebench",
+             license="CC-BY-4.0",
          )
  
      def _split_generators(self, dl_manager):
-         data_files = self.config.data_files
-         filepath = dl_manager.download_and_extract(data_files["train"] if isinstance(data_files, dict) else data_files)
+         data_files = self.config.data_files or {"train": "metadata.jsonl"}
+         train_files = data_files["train"] if isinstance(data_files, dict) else data_files
+         filepath = dl_manager.download_and_extract(train_files)
+         if isinstance(filepath, list):
+             filepath = filepath[0]
          return [SplitGenerator(name=Split.TRAIN, gen_kwargs={"filepath": filepath})]
  
      def _generate_examples(self, filepath):
-         print("f = ", filepath)
          if isinstance(filepath, list):
              filepath = filepath[0]
          with open(filepath, "r", encoding="utf-8") as f:
              for idx, line in enumerate(f):
                  example = json.loads(line)
+                 image_id = example.get("aligned_id") or example.get("image_id")
+                 if image_id is None:
+                     raise ValueError(f"Missing image identifier for example at line {idx}.")
                  yield idx, {
-                     "image_id": example["aligned_id"],  # Ensure this key exists in your JSONL
+                     "image_id": image_id,
                      "image_path": example["image_path"],
                      "observed_classes": example["detected_classes"],  # Already a list
                      "target_classes": example["target_classes"],
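The field mapping in `_generate_examples` (`aligned_id` → `image_id`, `detected_classes` → `observed_classes`) can be exercised without the `datasets` library. This is a standalone sketch replaying the same parsing logic on an in-memory JSONL line; the sample record mirrors the README example:

```python
import io
import json

def generate_examples(fileobj):
    # Same per-line logic as CUEBench._generate_examples, minus the builder.
    for idx, line in enumerate(fileobj):
        example = json.loads(line)
        image_id = example.get("aligned_id") or example.get("image_id")
        if image_id is None:
            raise ValueError(f"Missing image identifier for example at line {idx}.")
        yield idx, {
            "image_id": image_id,
            "image_path": example["image_path"],
            "observed_classes": example["detected_classes"],
            "target_classes": example["target_classes"],
        }

raw = json.dumps({
    "aligned_id": "00003.00019",
    "image_path": "images/00003.00019.png",
    "detected_classes": ["Car", "Bus", "Pedestrian"],
    "target_classes": ["PickupTruck"],
})
rows = list(generate_examples(io.StringIO(raw + "\n")))
```

Running this against a few lines of the real `metadata.jsonl` is a quick way to catch missing keys before `datasets-cli test`.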
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ datasets==2.14.6
+ pyarrow==11.0.0
+ numpy==1.26.4
+ pandas==2.3.3
+ huggingface_hub==0.36.0