---
annotations_creators:
  - expert-generated
language_creators:
  - other
language: en
license: cc-by-4.0
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - combination
task_categories:
  - other
task_ids:
  - multi-label-classification
pretty_name: CUEBench
configs:
  - config_name: clue
    default: true
    data_files:
      - split: train
        path: data/clue/train.jsonl
  - config_name: mep
    data_files:
      - split: train
        path: data/mep/train.jsonl
dataset_info:
  - config_name: clue
    features:
      - name: id
        dtype: int64
      - name: seq_name
        dtype: string
      - name: frame_count
        dtype: int64
      - name: aligned_id
        dtype: string
      - name: image_id
        dtype: string
      - name: observed_classes
        sequence: string
      - name: target_classes
        sequence: string
      - name: detected_classes
        sequence: string
      - name: image_path
        dtype: string
      - name: image
        dtype: image
    splits:
      - name: train
        num_bytes: 1101143
        num_examples: 1648
    download_size: 1101143
    dataset_size: 1101143
  - config_name: mep
    features:
      - name: id
        dtype: int64
      - name: seq_name
        dtype: string
      - name: frame_count
        dtype: int64
      - name: aligned_id
        dtype: string
      - name: image_id
        dtype: string
      - name: observed_classes
        sequence: string
      - name: target_classes
        sequence: string
      - name: detected_classes
        sequence: string
      - name: image_path
        dtype: string
      - name: image
        dtype: image
    splits:
      - name: train
        num_bytes: 845579
        num_examples: 1216
    download_size: 845579
    dataset_size: 845579
---

# CUEBench: Contextual Unobserved Entity Benchmark

CUEBench is a neurosymbolic benchmark that emphasizes contextual entity prediction in autonomous driving scenes. Unlike traditional detection tasks, CUEBench focuses on reasoning over unobserved entities — objects that may be occluded, out-of-frame, or affected by sensor failures.

## Dataset Summary

- **Modalities:** RGB dashcam imagery + symbolic annotations (provided as metadata)
- **Primary task:** predict the unobserved `target_classes` given the set of `observed_classes` in a scene
- **Geography / scenario:** urban autonomous driving across diverse traffic densities
- **License:** CC-BY-4.0 (update this if you choose a different license)

## Configurations

| Config | File | Description |
| --- | --- | --- |
| `clue` (default) | `data/clue/train.jsonl` | Contextual Unobserved Entity (CLUE) frames with heavy occlusions and single-target predictions. |
| `mep` | `data/mep/train.jsonl` | Multi-Entity Prediction (MEP) split that introduces complementary metadata and more diverse target sets. |

When this dataset is viewed on Hugging Face, the dataset viewer automatically exposes a config dropdown so you can switch between `clue` and `mep` without leaving the UI.

## Dataset Structure

### Data Fields

| Field | Type | Description |
| --- | --- | --- |
| `image_id` | `string` | Unique identifier for each frame (`aligned_id` in the raw metadata). |
| `image_path` | `string` | Relative path to the rendered frame image. |
| `observed_classes` | `list[string]` | Entity classes detected in-frame (cars, cones, pedestrians, etc.). |
| `target_classes` | `list[string]` | Entities inferred to exist but unobserved (occluded, off-frame, or lost to sensor failure). |
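For multi-label training, `target_classes` can be encoded as a multi-hot vector over the class taxonomy. A minimal sketch, using an illustrative subset of the taxonomy (the full list has ~40 classes, so `TAXONOMY` here is an assumption, not the real label set):

```python
# Multi-hot encoding of target_classes for multi-label classification.
# TAXONOMY is an illustrative subset, not the full ~40-class list.
TAXONOMY = ["Car", "Bus", "Pedestrian", "PickupTruck", "TrafficCone"]
CLASS_TO_INDEX = {name: i for i, name in enumerate(TAXONOMY)}

def multi_hot(target_classes):
    """Map a list of class names to a 0/1 vector over the taxonomy."""
    vec = [0] * len(TAXONOMY)
    for name in target_classes:
        vec[CLASS_TO_INDEX[name]] = 1
    return vec

# Encoding the example record's single target:
print(multi_hot(["PickupTruck"]))  # → [0, 0, 0, 1, 0]
```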

### Splits

Each configuration exposes a single `train` split sourced from either `raw/clue_metadata.jsonl` or `raw/mep_metadata.jsonl`. Feel free to carve out validation/test subsets before upload if you need them.
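One way to carve out such a subset is a deterministic, hash-based assignment, so the same record always lands in the same split across reruns. This is only a sketch of one possible approach, not how the benchmark authors split their data:

```python
import hashlib

def assign_split(image_id, val_fraction=0.1):
    """Deterministically assign a record to 'train' or 'validation'
    by hashing its image_id, so the split is stable across reruns."""
    digest = hashlib.sha256(image_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
    return "validation" if bucket < val_fraction else "train"

print(assign_split("00003.00019"))
```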

### Label Taxonomy

Representative classes include: Car, Bus, Pedestrian, PickupTruck, MediumSizedTruck, Animal, Standing, VehicleWithRider, ConstructionSign, TrafficCone, and more (~40 classes). Extend this section with the final taxonomy before publication if you want exhaustive documentation.

### Example Record

```json
{
  "image_id": "00003.00019",
  "observed_classes": ["Car", "Bus", "Pedestrian"],
  "target_classes": ["PickupTruck"],
  "image_path": "images/00003.00019.png"
}
```
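Since each line of `data/clue/train.jsonl` is one such JSON object, records parse with the standard library alone. The `read_jsonl` helper below is hypothetical, not part of the repository:

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Yield one record per non-empty line of a JSONL file."""
    with Path(path).open(encoding="utf-8") as handle:
        for line in handle:
            if line.strip():
                yield json.loads(line)

# Parsing a single line works the same way:
line = '{"image_id": "00003.00019", "target_classes": ["PickupTruck"]}'
record = json.loads(line)
print(record["image_id"])  # → 00003.00019
```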

## Usage

### Loading with `datasets`

```python
from datasets import load_dataset

dataset = load_dataset(
    "ishwarbb23/cuebench",
    name="clue",  # or "mep"
    split="train",
)
```

### Working From Source

```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "data/clue/train.jsonl"},  # swap with data/mep/train.jsonl
    split="train",
)
```

Tip: when loading from source, you can still switch configurations by pointing `data_files` at `data/mep/train.jsonl`.

## Regenerating viewer files

The repository keeps the original metadata dumps under `raw/`. To refresh the viewer-friendly JSONL files (e.g. after updating the raw annotations), run:

```shell
.venv/bin/python scripts/build_viewer_files.py
```

This script adds the derived columns (`image_id`, `observed_classes`, etc.) and drops the converted files into `data/clue/train.jsonl` and `data/mep/train.jsonl`. It also updates `data/stats.json`, which the dataset card references to keep the `dataset_info` counters accurate.
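The column derivation can be sketched as follows. This is a guess at what `scripts/build_viewer_files.py` presumably does, based on the Data Fields table (`image_id` mirrors `aligned_id`); the actual script may differ:

```python
def add_viewer_columns(raw_record):
    """Derive the viewer-facing columns from a raw metadata record.

    A sketch of what scripts/build_viewer_files.py presumably does;
    the real script may derive more columns or differ in details.
    """
    record = dict(raw_record)
    # Per the Data Fields table, image_id mirrors aligned_id.
    record["image_id"] = record["aligned_id"]
    record["image_path"] = f"images/{record['image_id']}.png"
    return record

raw = {"aligned_id": "00003.00019", "observed_classes": ["Car"]}
print(add_viewer_columns(raw)["image_path"])  # → images/00003.00019.png
```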

## Metrics

`metric.py` defines Mean Reciprocal Rank, Hits@K (1/3/5/10), and Coverage@K (1/3/5/10) over the predicted class rankings. When publishing to the Hugging Face Metrics Hub, expose the `compute(predictions, references)` signature so leaderboard integrations can consume it.
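For orientation, the three metrics can be sketched in plain Python from their standard definitions; `metric.py` remains the authoritative implementation:

```python
def reciprocal_rank(ranked, references):
    """1 / rank of the first reference class in the ranking, else 0."""
    for rank, name in enumerate(ranked, start=1):
        if name in references:
            return 1.0 / rank
    return 0.0

def hits_at_k(ranked, references, k):
    """1.0 if any reference class appears in the top-k, else 0.0."""
    return 1.0 if any(name in references for name in ranked[:k]) else 0.0

def coverage_at_k(ranked, references, k):
    """Fraction of reference classes recovered in the top-k."""
    if not references:
        return 0.0
    found = sum(1 for name in references if name in ranked[:k])
    return found / len(references)

ranked = ["Car", "PickupTruck", "Bus"]
refs = {"PickupTruck", "Bus"}
print(reciprocal_rank(ranked, refs))   # → 0.5
print(hits_at_k(ranked, refs, 1))      # → 0.0
print(coverage_at_k(ranked, refs, 3))  # → 1.0
```

Averaging each of these over all examples yields the corpus-level MRR, Hits@K, and Coverage@K.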

## Licensing

The dataset is currently tagged as CC-BY-4.0. Update this section if you select a different license.

## Citation

```bibtex
@misc{cuebench2025,
  title  = {CUEBench: Contextual Unobserved Entity Benchmark},
  author = {CUEBench Authors},
  year   = {2025}
}
```

## Hugging Face Upload Checklist

1. Install tools: `pip install datasets huggingface_hub`, then run `huggingface-cli login`.
2. Create the dataset repo: `huggingface-cli repo create cuebench --type dataset` (or via the UI).
3. Ensure the directory layout:

   ```
   cuebench/
     README.md
     data/
       clue/train.jsonl
       mep/train.jsonl
     raw/
       clue_metadata.jsonl
       mep_metadata.jsonl
     metric.py             # optional metric script
     scripts/
       build_viewer_files.py
       push_to_hub.py
     images/...            # optional or host separately
   ```

4. Initialize Git + LFS:

   ```shell
   cd cuebench
   git init
   git lfs install
   git lfs track "*.jsonl" "images/*"
   git remote add origin https://huggingface.co/datasets/ishwarbb23/cuebench
   git add .
   git commit -m "Initial CUEBench dataset"
   git push origin main
   ```

5. Regenerate the viewer files anytime the raw metadata changes: `.venv/bin/python scripts/build_viewer_files.py`
6. Push the prepared splits to the Hub (per config) using `.venv/bin/python scripts/push_to_hub.py --repo ishwarbb23/cuebench`
7. On the Hub page, trigger the dataset preview to ensure the loader runs.
8. (Optional) Publish the metric under `metrics/cuebench-metric` following the Metrics Hub template and link it from the dataset card.

Update these steps with any organization-specific tooling you use.