---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- visual-reasoning
- VQA
- synthetic
- domain-robustness
- CLEVR
pretty_name: Super-CLEVR
size_categories:
- 100K<n<1M
---
# Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning
**[CVPR 2023 Highlight (top 2.5%)]**
Paper: [Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning](https://arxiv.org/abs/2212.00259)
**Authors:** Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, Alan Yuille
## Dataset Description
Super-CLEVR is a synthetic dataset designed to systematically study the **domain robustness** of visual reasoning models across four key factors:
- **Visual complexity** — varying levels of scene and object complexity
- **Question redundancy** — controlling redundant information in questions
- **Concept distribution** — shifts in the distribution of visual concepts
- **Concept compositionality** — novel compositions of known concepts
## Dataset
Super-CLEVR contains 30k images of vehicles (from [UDA-Part](https://qliu24.github.io/udapart/)) randomly placed in scenes, with 10 question-answer pairs for each image. The vehicles carry part annotations, so objects in the images can have distinct part-level attributes.
The full list of objects and parts in Super-CLEVR scenes is available [here](https://www.cs.jhu.edu/~zhuowan/zhuowan/SuperCLEVR/obj_part_list/all_objects.html).
The first 20k images and their paired questions are used for training, the next 5k for validation, and the last 5k for testing.
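The index-based split above can be sketched as follows. This is a minimal illustration, assuming (in CLEVR style) that each question record carries an `"image_index"` field; that field name is an assumption, not confirmed by this card.

```python
def split_by_image_index(questions):
    """Partition question records by image index, per the card's split:
    images 0-19,999 train, 20,000-24,999 val, 25,000-29,999 test.
    Assumes each record has an "image_index" key (CLEVR-style layout)."""
    splits = {"train": [], "val": [], "test": []}
    for q in questions:
        idx = q["image_index"]
        if idx < 20_000:
            splits["train"].append(q)
        elif idx < 25_000:
            splits["val"].append(q)
        else:
            splits["test"].append(q)
    return splits

# Tiny synthetic example: one question per boundary image index
demo = [{"image_index": i} for i in (0, 20_000, 25_000)]
parts = split_by_image_index(demo)
```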
The dataset is available on [Hugging Face](https://huggingface.co/datasets/RyanWW/Super-CLEVR):
| Data | Download Link |
|--------------------------|---|
| images | [images.zip](https://huggingface.co/datasets/RyanWW/Super-CLEVR/resolve/main/images.zip?download=true) |
| scenes | [superCLEVR_scenes.json](https://huggingface.co/datasets/RyanWW/Super-CLEVR/resolve/main/superCLEVR_scenes.json?download=true) |
| questions | [superCLEVR_questions_30k.json](https://huggingface.co/datasets/RyanWW/Super-CLEVR/resolve/main/superCLEVR_questions_30k.json?download=true) |
| questions (- redundancy) | [superCLEVR_questions_30k_NoRedundant.json](https://huggingface.co/datasets/RyanWW/Super-CLEVR/resolve/main/superCLEVR_questions_30k_NoRedundant.json?download=true) |
| questions (+ redundancy) | [superCLEVR_questions_30k_AllRedundant.json](https://huggingface.co/datasets/RyanWW/Super-CLEVR/resolve/main/superCLEVR_questions_30k_AllRedundant.json?download=true) |
## Usage
```python
from huggingface_hub import hf_hub_download
# Download a specific file
path = hf_hub_download(
    repo_id="RyanWW/Super-CLEVR",
    filename="superCLEVR_questions_30k.json",
    repo_type="dataset",
)
```
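Once downloaded, the question file is plain JSON and can be read with the standard library. The sketch below assumes a CLEVR-style layout with a top-level `"questions"` list; that key name is an assumption, not confirmed by this card, so the loader falls back to returning the parsed file as-is.

```python
import json

def load_questions(path):
    """Load a Super-CLEVR question file from a local path.

    Assumes a CLEVR-style top-level {"questions": [...]} layout
    (an assumption); returns the raw parsed JSON otherwise."""
    with open(path) as f:
        data = json.load(f)
    return data.get("questions", data) if isinstance(data, dict) else data
```

For example, `load_questions(path)` on the file fetched above would yield the list of question records ready for iteration.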
## Citation
```bibtex
@inproceedings{li2023super,
title={Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning},
author={Li, Zhuowan and Wang, Xingrui and Stengel-Eskin, Elias and Kortylewski, Adam and Ma, Wufei and Van Durme, Benjamin and Yuille, Alan L},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={14963--14973},
year={2023}
}
```
## Links
- **Code:** [github.com/Lizw14/Super-CLEVR](https://github.com/Lizw14/Super-CLEVR)
- **Paper:** [arxiv.org/abs/2212.00259](https://arxiv.org/abs/2212.00259)
## License
This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).