---
task_categories:
- image-classification
tags:
- composition
---
# Dataset Card for CGQA
This is the CGQA dataset from the [Learning Graph Embeddings for Compositional Zero-shot Learning](https://arxiv.org/abs/2102.01987) paper.
## Citation
If you use this dataset, please cite the following papers:
```
@inproceedings{naeem2021learning,
title={Learning graph embeddings for compositional zero-shot learning},
author={Naeem, Muhammad Ferjad and Xian, Yongqin and Tombari, Federico and Akata, Zeynep},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={953--962},
year={2021}
}
```
CGQA is derived from the GQA dataset:
```
@inproceedings{hudson2019gqa,
title={Gqa: A new dataset for real-world visual reasoning and compositional question answering},
author={Hudson, Drew A and Manning, Christopher D},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={6700--6709},
year={2019}
}
```
The GQA dataset is in turn derived from Visual Genome:
```
@article{krishna2017visual,
title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
journal={International journal of computer vision},
volume={123},
number={1},
pages={32--73},
year={2017},
publisher={Springer}
}
```
If you use this dataset with [compositional soft prompting](https://arxiv.org/abs/2204.03574), please also cite:
```
@inproceedings{csp2023,
title={Learning to Compose Soft Prompts for Compositional Zero-Shot Learning},
author={Nihal V. Nayak and Peilin Yu and Stephen H. Bach},
booktitle={International Conference on Learning Representations},
year={2023}
}
```