---
annotations_creators:
- expert-generated
language_creators:
- other
language: en
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- combination
task_categories:
- other
task_ids:
- multi-label-classification
pretty_name: CUEBench
configs:
- config_name: clue
  default: true
  data_files:
  - split: train
    path: data/clue/train.jsonl
- config_name: mep
  data_files:
  - split: train
    path: data/mep/train.jsonl
dataset_info:
- config_name: clue
  features:
  - name: id
    dtype: int64
  - name: seq_name
    dtype: string
  - name: frame_count
    dtype: int64
  - name: aligned_id
    dtype: string
  - name: image_id
    dtype: string
  - name: observed_classes
    sequence: string
  - name: target_classes
    sequence: string
  - name: detected_classes
    sequence: string
  - name: image_path
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 1101143
    num_examples: 1648
  download_size: 1101143
  dataset_size: 1101143
- config_name: mep
  features:
  - name: id
    dtype: int64
  - name: seq_name
    dtype: string
  - name: frame_count
    dtype: int64
  - name: aligned_id
    dtype: string
  - name: image_id
    dtype: string
  - name: observed_classes
    sequence: string
  - name: target_classes
    sequence: string
  - name: detected_classes
    sequence: string
  - name: image_path
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 845579
    num_examples: 1216
  download_size: 845579
  dataset_size: 845579
---
# CUEBench: Contextual Unobserved Entity Benchmark
CUEBench is a neurosymbolic benchmark that emphasizes **contextual entity prediction** in autonomous driving scenes. Unlike traditional detection tasks, CUEBench focuses on reasoning over **unobserved entities** — objects that may be occluded, out-of-frame, or affected by sensor failures.
## Dataset Summary
- **Modalities**: RGB dashcam imagery + symbolic annotations (provided as metadata)
- **Primary task**: Predict unobserved `target_classes` given the set of `observed_classes` in a scene
- **Geography / Scenario**: Urban autonomous driving across diverse traffic densities
- **License**: CC-BY-4.0 (update the `license` tag in the front matter if you choose a different license)
### Configurations
| Config | File | Description |
| --- | --- | --- |
| `clue` *(default)* | `data/clue/train.jsonl` | Contextual Unobserved Entity (CLUE) frames with heavy occlusions and single-target predictions. |
| `mep` | `data/mep/train.jsonl` | Multi-Entity Prediction (MEP) split that introduces complementary metadata and more diverse target sets. |
When this dataset is viewed on Hugging Face, the dataset viewer automatically exposes a **config dropdown** so you can switch between `clue` and `mep` without leaving the UI.
## Dataset Structure
### Data Fields
| Field | Type | Description |
| --- | --- | --- |
| `image_id` | `string` | Unique identifier for each frame (`aligned_id` in the raw metadata). |
| `image_path` | `string` | Relative path to the rendered frame image. |
| `observed_classes` | `list[string]` | Entity classes detected in-frame (cars, cones, pedestrians, etc.). |
| `target_classes` | `list[string]` | Entities inferred to exist but unobserved (occluded, off-frame, or lost to sensor failure). |
### Splits
Each configuration exposes a single **train** split sourced from either `clue_metadata.jsonl` or `mep_metadata.jsonl`. Feel free to carve out validation/test subsets before upload if you need them.
### Label Taxonomy
Representative classes include: `Car`, `Bus`, `Pedestrian`, `PickupTruck`, `MediumSizedTruck`, `Animal`, `Standing`, `VehicleWithRider`, `ConstructionSign`, `TrafficCone`, and more (~40 classes). Extend this section with the final taxonomy before publication if you want exhaustive documentation.
## Example Record
```json
{
  "image_id": "00003.00019",
  "observed_classes": ["Car", "Bus", "Pedestrian"],
  "target_classes": ["PickupTruck"],
  "image_path": "images/00003.00019.png"
}
```
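Because `target_classes` are by definition unobserved, one quick sanity check on records of this shape is that the two class lists never overlap. A standalone illustration (not part of the dataset tooling):

```python
import json

# The example record from above, as a raw JSON string.
record_json = """
{"image_id": "00003.00019",
 "observed_classes": ["Car", "Bus", "Pedestrian"],
 "target_classes": ["PickupTruck"],
 "image_path": "images/00003.00019.png"}
"""
record = json.loads(record_json)

# Targets are unobserved entities, so they should not also appear as observed.
overlap = set(record["observed_classes"]) & set(record["target_classes"])
# overlap is empty for a well-formed record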
## Usage
### Loading with `datasets`
```python
from datasets import load_dataset

dataset = load_dataset(
    "ishwarbb23/cuebench",
    name="clue",  # or "mep"
    split="train",
)
```
### Working From Source
```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "data/clue/train.jsonl"},  # swap with data/mep/train.jsonl
    split="train",
)
```
> **Tip:** From source, you can still switch configurations by pointing `data_files` to `data/mep/train.jsonl`.
### Regenerating viewer files
The repository keeps the original metadata dumps under `raw/`. To refresh the
viewer-friendly JSONL files (e.g. after updating the raw annotations), run:
```bash
.venv/bin/python scripts/build_viewer_files.py
```
This script adds the derived columns (`image_id`, `observed_classes`, etc.) and
drops the converted files into `data/clue/train.jsonl` and
`data/mep/train.jsonl`. It also updates `data/stats.json`, which is referenced by
the dataset card to keep `dataset_info` counters accurate.
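For reference, the conversion step can be sketched roughly as follows. The actual `scripts/build_viewer_files.py` may derive more columns, and the `data/stats.json` update is omitted here:

```python
import json
from pathlib import Path


def build_viewer_file(raw_path: str, out_path: str) -> int:
    """Sketch of the conversion: read raw metadata records line by line
    and emit viewer-friendly JSONL rows with the derived columns.
    Field names follow the dataset card; the real script may differ."""
    rows = 0
    out = Path(out_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    with open(raw_path) as src, open(out, "w") as dst:
        for line in src:
            rec = json.loads(line)
            # Derived column: expose aligned_id under the viewer-facing name.
            rec["image_id"] = rec.get("aligned_id", "")
            # Derived column: conventional relative path for the frame image.
            rec.setdefault("image_path", f"images/{rec['image_id']}.png")
            dst.write(json.dumps(rec) + "\n")
            rows += 1
    return rows
```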
## Metrics
`metric.py` defines **Mean Reciprocal Rank**, **Hits@K (1/3/5/10)**, and **Coverage@K (1/3/5/10)** over the predicted class rankings. When publishing to the Hugging Face Metrics Hub, expose the `compute(predictions, references)` signature so leaderboard integrations can consume it.
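These are standard ranking metrics; a minimal sketch of how they are commonly computed over a ranked list of predicted classes looks like this (the actual `metric.py` may differ in details such as tie handling):

```python
def reciprocal_rank(ranked, targets):
    """1 / rank of the first target class in the ranking, 0 if none appear."""
    for i, cls in enumerate(ranked, start=1):
        if cls in targets:
            return 1.0 / i
    return 0.0


def hits_at_k(ranked, targets, k):
    """1.0 if any target class appears in the top-k predictions, else 0.0."""
    return 1.0 if any(cls in targets for cls in ranked[:k]) else 0.0


def coverage_at_k(ranked, targets, k):
    """Fraction of distinct target classes recovered within the top-k."""
    target_set = set(targets)
    if not target_set:
        return 0.0
    top = set(ranked[:k])
    return len(target_set & top) / len(target_set)
```

Mean Reciprocal Rank is then the average of `reciprocal_rank` over all examples, and likewise for the @K variants.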
## Licensing
The dataset is currently tagged as **CC-BY-4.0**. Update this section if you select a different license.
## Citation
```
@misc{cuebench2025,
  title  = {CUEBench: Contextual Unobserved Entity Benchmark},
  author = {CUEBench Authors},
  year   = {2025}
}
```
## Hugging Face Upload Checklist
1. Install tools: `pip install datasets huggingface_hub` and run `huggingface-cli login`.
2. Create the dataset repo: `huggingface-cli repo create cuebench --type dataset` (or via UI).
3. Ensure directory layout:
```
cuebench/
  README.md
  data/
    clue/train.jsonl
    mep/train.jsonl
  raw/
    clue_metadata.jsonl
    mep_metadata.jsonl
  metric.py                      # optional metric script
  scripts/build_viewer_files.py
  scripts/push_to_hub.py
  images/...                     # optional or host separately
```
4. Initialize Git + LFS:
```bash
cd cuebench
git init
git lfs install
git lfs track "*.jsonl" "images/*"
git remote add origin https://huggingface.co/datasets/ishwarbb23/cuebench
git add .
git commit -m "Initial CUEBench dataset"
git push origin main
```
5. Regenerate viewer files anytime the raw metadata changes: `.venv/bin/python scripts/build_viewer_files.py`
6. Push the prepared splits to the Hub (per config) using `.venv/bin/python scripts/push_to_hub.py --repo ishwarbb23/cuebench`
7. On the Hub page, trigger the dataset preview to ensure the loader runs.
8. (Optional) Publish the metric under `metrics/cuebench-metric` following the Metrics Hub template and link it from the dataset card.
Update these steps with any organization-specific tooling you use.
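As a rough sketch of what `scripts/push_to_hub.py` might look like, the snippet below uploads each config from its JSONL file. The CLI flags and upload logic shown here are hypothetical assumptions, not the script's actual interface:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI sketch for scripts/push_to_hub.py (flags are hypothetical)."""
    parser = argparse.ArgumentParser(description="Push CUEBench configs to the Hub")
    parser.add_argument("--repo", required=True,
                        help="target dataset repo, e.g. ishwarbb23/cuebench")
    parser.add_argument("--configs", nargs="+", default=["clue", "mep"],
                        help="which configs to upload")
    return parser


def push(repo: str, configs) -> None:
    # Deferred import so the parser can be used without `datasets` installed.
    from datasets import load_dataset

    for config in configs:
        ds = load_dataset("json", data_files=f"data/{config}/train.jsonl",
                          split="train")
        # Each config lands under its own name in the same dataset repo.
        ds.push_to_hub(repo, config_name=config, split="train")
```

Invoking `push(args.repo, args.configs)` after parsing would perform the upload (requires a prior `huggingface-cli login`).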