---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- visual-reasoning
- VQA
- synthetic
- domain-robustness
- CLEVR
pretty_name: Super-CLEVR
size_categories:
- 100K<n<1M
---

# Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning

**[CVPR 2023 Highlight (top 2.5%)]**

Paper: [Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning](https://arxiv.org/abs/2212.00259)

**Authors:** Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, Alan Yuille

## Dataset Description

Super-CLEVR is a synthetic dataset designed to systematically study the **domain robustness** of visual reasoning models across four key factors:

- **Visual complexity** — varying levels of scene and object complexity
- **Question redundancy** — controlling redundant information in questions
- **Concept distribution** — shifts in the distribution of visual concepts
- **Concept compositionality** — novel compositions of known concepts

## Dataset Structure

Super-CLEVR contains **30,000 images** of vehicles (sourced from [UDA-Part](https://github.com/TACJu/UDA-Part)) randomly placed in 3D-rendered scenes, with **10 question-answer pairs per image** (300k QA pairs total). Vehicles include part-level annotations, enabling questions about distinct part attributes.

### Splits

| Split      | Images |
|------------|--------|
| Train      | 20,000 |
| Validation | 5,000  |
| Test       | 5,000  |

### Files

| File | Description |
|------|-------------|
| `images.zip` | 30k rendered scene images |
| `superCLEVR_scenes.json` | Scene annotations (objects, parts, spatial relations) |
| `superCLEVR_questions_30k.json` | Standard question-answer pairs |
| `superCLEVR_questions_30k_NoRedundant.json` | Questions with redundancy removed |
| `superCLEVR_questions_30k_AllRedundant.json` | Questions with maximum redundancy |

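The exact schema of `superCLEVR_scenes.json` is not documented on this card. The sketch below assumes a CLEVR-style layout (a top-level `scenes` list whose entries carry an `objects` list with per-object attributes) and uses a hypothetical miniature sample in place of the real file, so field names may differ from the actual annotations:

```python
import json

# Hypothetical stand-in for superCLEVR_scenes.json; the field names below
# are assumptions based on the CLEVR-style schema, not the verified layout.
sample = {
    "scenes": [
        {
            "image_filename": "superCLEVR_new_000000.png",
            "objects": [
                {"shape": "suv", "color": "red", "size": "large"},
                {"shape": "bicycle", "color": "blue", "size": "small"},
            ],
        }
    ]
}

# json.loads on a dumped string mirrors json.load(open(path)) on the real file.
scenes = json.loads(json.dumps(sample))["scenes"]

# Count objects per vehicle category across all scenes.
counts = {}
for scene in scenes:
    for obj in scene["objects"]:
        counts[obj["shape"]] = counts.get(obj["shape"], 0) + 1

print(counts)
```

Once the real file's keys are confirmed, the same loop generalizes to part-level and relation annotations.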
## Usage

```python
from huggingface_hub import hf_hub_download

# Download a specific file from the dataset repository
path = hf_hub_download(
    repo_id="RyanWW/Super-CLEVR",
    filename="superCLEVR_questions_30k.json",
    repo_type="dataset",
)
```
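After downloading, the questions file can be read with the standard `json` module. The sketch below substitutes a hypothetical two-entry sample for the real file and assumes a CLEVR-style top-level `questions` list with `question`/`answer` fields; the actual key names may differ:

```python
import json
import os
import tempfile

# Hypothetical miniature stand-in for superCLEVR_questions_30k.json
# (assumed CLEVR-style schema: a top-level "questions" list).
sample = {
    "questions": [
        {"image_index": 0, "question": "What color is the large suv?", "answer": "red"},
        {"image_index": 0, "question": "How many bicycles are there?", "answer": "1"},
    ]
}

# Write and re-read the sample to mirror loading the downloaded file at `path`.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    tmp_path = f.name

with open(tmp_path) as f:
    questions = json.load(f)["questions"]
os.remove(tmp_path)

pairs = [(q["question"], q["answer"]) for q in questions]
print(len(pairs))
```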

## Citation

```bibtex
@inproceedings{li2023super,
  title={Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning},
  author={Li, Zhuowan and Wang, Xingrui and Stengel-Eskin, Elias and Kortylewski, Adam and Ma, Wufei and Van Durme, Benjamin and Yuille, Alan L},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14963--14973},
  year={2023}
}
```

## Links

- **Code:** [github.com/Lizw14/Super-CLEVR](https://github.com/Lizw14/Super-CLEVR)
- **Paper:** [arxiv.org/abs/2212.00259](https://arxiv.org/abs/2212.00259)

## License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).