---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
---

# CUEBench

> **Tip:** When loading from source, you can still switch configurations by pointing `data_files` to `data/mep/train.jsonl`.

### Regenerating viewer files

The repository keeps the original metadata dumps under `raw/`. To refresh the viewer-friendly JSONL files (e.g. after updating the raw annotations), run:

```bash
/.venv/bin/python scripts/build_viewer_files.py
```

This script adds the derived columns (`image_id`, `observed_classes`, etc.) and writes the converted files to `data/clue/train.jsonl` and `data/mep/train.jsonl`. It also updates `data/stats.json`, which the dataset card references to keep the `dataset_info` counters accurate.

## Metrics

`metric.py` defines **Mean Reciprocal Rank**, **Hits@K (1/3/5/10)**, and **Coverage@K (1/3/5/10)** over the predicted class rankings. When publishing to the Hugging Face Metrics Hub, expose the `compute(predictions, references)` signature so leaderboard integrations can consume it.

## Licensing

The dataset is currently tagged as **CC-BY-4.0**. Update this section if you select a different license.

## Citation

```
@misc{cuebench2025,
  title  = {CUEBench: Contextual Unobserved Entity Benchmark},
  author = {CUEBench Authors},
  year   = {2025}
}
```

## Hugging Face Upload Checklist

1. Install tools: `pip install datasets huggingface_hub` and run `huggingface-cli login`.
2. Create the dataset repo: `huggingface-cli repo create cuebench --type dataset` (or via the web UI).
3. Ensure the directory layout:

   ```
   cuebench/
     README.md
     data/
       clue/train.jsonl
       mep/train.jsonl
     raw/
       clue_metadata.jsonl
       mep_metadata.jsonl
     metric.py                     # optional metric script
     scripts/build_viewer_files.py
     scripts/push_to_hub.py
     images/...                    # optional, or host separately
   ```

4. Initialize Git + LFS:

   ```bash
   cd cuebench
   git init
   git lfs install
   git lfs track "*.jsonl" "images/*"
   git remote add origin https://huggingface.co/datasets/ishwarbb23/cuebench
   git add .
   git commit -m "Initial CUEBench dataset"
   git push origin main
   ```

5. Regenerate the viewer files whenever the raw metadata changes: `/.venv/bin/python scripts/build_viewer_files.py`
6. Push the prepared splits to the Hub (one per config) using `/.venv/bin/python scripts/push_to_hub.py --repo ishwarbb23/cuebench`
7. On the Hub page, trigger the dataset preview to confirm the loader runs.
8. (Optional) Publish the metric under `metrics/cuebench-metric` following the Metrics Hub template and link it from the dataset card.

Update these steps with any organization-specific tooling you use.
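The metrics named in the Metrics section can be sketched in plain Python. The exact contract here is an assumption, not the real `metric.py`: `predictions[i]` is a ranked list of class names, `references[i]` is the set of gold classes for example i, and Coverage@K is taken as the fraction of gold classes appearing in the top K.

```python
from typing import Dict, List, Sequence


def compute(predictions: List[List[str]],
            references: List[Sequence[str]],
            ks: Sequence[int] = (1, 3, 5, 10)) -> Dict[str, float]:
    """Hypothetical sketch of the CUEBench metric contract.

    MRR uses the rank of the first gold class found in the ranking;
    Hits@K counts examples with any gold class in the top K;
    Coverage@K averages the covered fraction of gold classes.
    """
    n = len(references)
    mrr = 0.0
    hits = {k: 0.0 for k in ks}
    cov = {k: 0.0 for k in ks}
    for ranking, gold in zip(predictions, references):
        gold = set(gold)
        # 1-based ranks of every gold class that appears in the ranking
        ranks = [i + 1 for i, cls in enumerate(ranking) if cls in gold]
        if ranks:
            mrr += 1.0 / ranks[0]
        for k in ks:
            topk = set(ranking[:k])
            if topk & gold:
                hits[k] += 1.0
            cov[k] += len(topk & gold) / len(gold)
    scores = {"mrr": mrr / n}
    scores.update({f"hits@{k}": hits[k] / n for k in ks})
    scores.update({f"coverage@{k}": cov[k] / n for k in ks})
    return scores
```

Keeping the `compute(predictions, references)` entry point, as the card recommends, lets leaderboard integrations call the metric without knowing its internals.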
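As a rough illustration of the derived columns that the viewer-file build step produces, here is a minimal sketch. The raw field names (`image`, `annotations`, `label`) are assumptions; the real `scripts/build_viewer_files.py` may read a different schema and also writes `data/stats.json`, which this sketch does not cover.

```python
import json
from typing import Dict, Iterable, List


def add_viewer_columns(record: Dict) -> Dict:
    """Derive viewer-friendly columns from a raw metadata record.

    Assumed input shape (hypothetical): {"image": "images/0001.jpg",
    "annotations": [{"label": "cat"}, ...]}.
    """
    out = dict(record)
    # image_id: file stem of the image path, e.g. "images/0001.jpg" -> "0001"
    out["image_id"] = record["image"].rsplit("/", 1)[-1].rsplit(".", 1)[0]
    # observed_classes: de-duplicated annotation labels, order preserved
    seen: List[str] = []
    for ann in record.get("annotations", []):
        if ann["label"] not in seen:
            seen.append(ann["label"])
    out["observed_classes"] = seen
    return out


def write_jsonl(records: Iterable[Dict], path: str) -> None:
    """Write one JSON object per line, the layout the dataset viewer expects."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
```

One record per line keeps the files streamable, which is why the viewer-facing splits live as JSONL under `data/clue/` and `data/mep/` rather than as a single JSON array.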