DRAKE: Dynamic Real-world Agent Knowledge Evaluation
Official dataset repository for the DRAKE benchmark, proposed in:
Co-LoRA: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients
Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars
ICLR 2026 · Paper · Code
Overview
DRAKE is a multi-modal federated continual learning benchmark spanning 40 distinct tasks with realistic distribution shifts over time. It is specifically designed to evaluate personalized federated learning methods under both data heterogeneity (different tasks across clients) and temporal distribution shifts (tasks evolve over rounds).
Detailed Task Structure
Tasks in DRAKE are grouped into categories including multi-modal reasoning, visual relation understanding, and VQA, covering a wide range of real-world multi-modal agent capabilities.
Dataset Construction
The datasets included in DRAKE were sourced from publicly available multi-modal datasets. We apply additional processing — including reformatting into instruction-following QA pairs, temporal task splitting, and client-wise data partitioning — to construct a benchmark suitable for federated continual learning evaluation. For some datasets we use the original query-response pairs; for others we use the processed query-response pairs proposed in the DEMON and Mantis benchmarks.
Constituent Datasets
| Dataset | Task Type | Reference |
|---|---|---|
| Fashion200K | Fashion relation QA / fashion image retrieval | Han et al., ICCV 2017 |
| FashionIQ | Fashion retrieval with language feedback | Wu et al., CVPR 2021 |
| NLVR2 | Dual-image visual reasoning | Suhr et al., ACL 2019 |
| CIRR | Contextual image retrieval / image-transition reasoning | Liu et al., ICCV 2021 |
| VISION | Industrial visual inspection / multi-image relation QA | Bai et al., arXiv 2023 |
| MagicBrush | Instruction-guided image editing | Zhang et al., NeurIPS 2023 |
| VSR | Visual spatial reasoning | Liu et al., TACL 2023 |
| SEED-Bench-2 | Multimodal reasoning benchmark; temporal / instance-level / KG QA | Li et al., CVPR 2024 |
| IRFL | Visual figurative language understanding | Yosef et al., EMNLP 2023 |
| Bongard-HOI | Few-shot HOI visual reasoning | Jiang et al., CVPR 2022 |
| Bongard-OpenWorld | Open-world visual concept reasoning | Wu et al., ICLR 2024 |
| COMICS | Comic-panel / multi-image-text reasoning | Iyyer et al., CVPR 2017 |
| VizWiz | Accessibility-focused VQA; also used for question-answer difference reasoning in DRAKE | Gurari et al., CVPR 2018 |
| MIT-States | Visual state recognition / compositional reasoning | Isola et al., CVPR 2015 |
| Co-Instruct | Image-quality and multi-image comparison QA | Wu et al., ECCV 2024 |
| TQA | Textbook question answering | Kembhavi et al., CVPR 2017 |
| DVQA | Data visualization / bar-chart QA | Kafle et al., CVPR 2018 |
| IconQA | Icon / diagram question answering | Lu et al., NeurIPS 2021 |
| WCVQA | World-cuisine visual question answering | Winata et al., arXiv 2024 |
| DreamSim | Unseen task: similar-image matching | Fu et al., arXiv 2023 |
| Mantis (Contrast-Caption subset) | Unseen task: caption-image matching (seen-image and unseen-image splits) | Jiang et al., TMLR 2024; Yu et al., arXiv 2022 |
| ImageCoDe | Unseen task: sequential image choice from contextual descriptions | Krojer et al., ACL 2022 |
| RecipeQA | Unseen task: incoherent image detection / next-image prediction in cooking procedures | Yagcioglu et al., EMNLP 2018 |
| HQ-Edit | Unseen task: image-edit instruction direction check | Hui et al., ICLR 2025 |
Note to contributors: If you find a dataset missing from this table, please open an issue or pull request on our GitHub repo.
Dataset Credits & References
DRAKE builds on the following publicly available datasets.
% =========================
% DRAKE training datasets
% =========================
@inproceedings{han2017fashion200k, title = {Automatic Spatially-aware Fashion Concept Discovery}, author = {Han, Xintong and Wu, Zuxuan and Huang, Phoenix X. and Zhang, Xiao and Zhu, Menglong and Li, Yuan and Zhao, Yang and Davis, Larry S.}, booktitle = {ICCV}, year = {2017} }
@inproceedings{wu2021fashioniq, title = {{Fashion IQ}: A New Dataset Towards Retrieving Images by Natural Language Feedback}, author = {Wu, Hui and Gao, Yupeng and Guo, Xiaoxiao and Al-Halah, Ziad and Rennie, Steven and Grauman, Kristen and Feris, Rogerio}, booktitle = {CVPR}, year = {2021} }
@inproceedings{suhr2019nlvr2, title = {A Corpus for Reasoning about Natural Language Grounded in Photographs}, author = {Suhr, Alane and Zhou, Stephanie and Zhang, Ally and Zhang, Iris and Bai, Huajun and Artzi, Yoav}, booktitle = {ACL}, year = {2019} }
@inproceedings{liu2021cirr, title = {Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models}, author = {Liu, Zheyuan and Rodriguez-Opazo, Cristian and Teney, Damien and Gould, Stephen}, booktitle = {ICCV}, year = {2021} }
@article{bai2023vision, title = {{VISION} Datasets: A Benchmark for Vision-based Industrial Inspection}, author = {Bai, Haoping and Mou, Shancong and Likhomanenko, Tatiana and Cinbis, Ramazan Gokberk and Tuzel, Oncel and Huang, Ping and Shan, Jiulong and Shi, Jianjun and Cao, Meng}, journal = {arXiv preprint arXiv:2306.07890}, year = {2023} }
@article{zhang2023magicbrush, title = {{MagicBrush}: A Manually Annotated Dataset for Instruction-Guided Image Editing}, author = {Zhang, Kai and Mo, Lingbo and Chen, Wenhu and Sun, Huan and Su, Yu}, journal = {Advances in Neural Information Processing Systems}, volume = {36}, year = {2023} }
@article{liu2023vsr, title = {Visual Spatial Reasoning}, author = {Liu, Fangyu and Emerson, Guy and Collier, Nigel}, journal = {Transactions of the Association for Computational Linguistics}, volume = {11}, pages = {635--651}, year = {2023} }
@inproceedings{li2024seedbench2, title = {{SEED-Bench-2}: Benchmarking Multimodal Large Language Models}, author = {Li, Bohao and Ge, Yuying and Ge, Yixiao and Wang, Guangzhi and Wang, Rui and Zhang, Ruimao and Shan, Ying}, booktitle = {CVPR}, year = {2024} }
@inproceedings{yosef2023irfl, title = {{IRFL}: Image Recognition of Figurative Language}, author = {Yosef, Ron and Bitton, Yonatan and Shahaf, Dafna}, booktitle = {EMNLP}, year = {2023} }
@inproceedings{jiang2022bongardhoi, title = {{Bongard-HOI}: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions}, author = {Jiang, Huaizu and Ma, Xiaojian and Nie, Weili and Yu, Zhiding and Zhu, Yuke and Anandkumar, Anima}, booktitle = {CVPR}, year = {2022} }
@inproceedings{wu2024bongardopenworld, title = {{Bongard-OpenWorld}: Few-Shot Reasoning for Free-form Visual Concepts in the Real World}, author = {Wu, Rujie and Ma, Xiaojian and Zhang, Zhenliang and Wang, Wei and Li, Qing and Zhu, Song-Chun and Wang, Yizhou}, booktitle = {ICLR}, year = {2024} }
@inproceedings{iyyer2017comics, title = {The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives}, author = {Iyyer, Mohit and Manjunatha, Varun and Guha, Anupam and Vyas, Yogarshi and Boyd-Graber, Jordan and Daume III, Hal and Davis, Larry S.}, booktitle = {CVPR}, year = {2017} }
@inproceedings{gurari2018vizwiz, title = {VizWiz Grand Challenge: Answering Visual Questions from Blind People}, author = {Gurari, Danna and Li, Qing and Stangl, Abigale J. and Guo, Anhong and Lin, Chi and Grauman, Kristen and Luo, Jiebo and Bigham, Jeffrey P.}, booktitle = {CVPR}, year = {2018} }
@inproceedings{isola2015mitstates, title = {Discovering States and Transformations in Image Collections}, author = {Isola, Phillip and Lim, Joseph J. and Adelson, Edward H.}, booktitle = {CVPR}, year = {2015} }
@inproceedings{wu2024coinstruct, title = {Towards Open-Ended Visual Quality Comparison}, author = {Wu, Haoning and Zhu, Hanwei and Zhang, Zicheng and Zhang, Erli and Chen, Chaofeng and Liao, Liang and Li, Chunyi and Wang, Annan and Sun, Wenxiu and Yan, Qiong and Liu, Xiaohong and Zhai, Guangtao and Wang, Shiqi and Lin, Weisi}, booktitle = {ECCV}, year = {2024} }
@inproceedings{kembhavi2017tqa, title = {Are You Smarter Than a Sixth Grader? Textbook Question Answering for Multimodal Machine Comprehension}, author = {Kembhavi, Aniruddha and Seo, Minjoon and Schwenk, Dustin and Choi, Jonghyun and Farhadi, Ali and Hajishirzi, Hannaneh}, booktitle = {CVPR}, year = {2017} }
@inproceedings{kafle2018dvqa, title = {{DVQA}: Understanding Data Visualizations via Question Answering}, author = {Kafle, Kushal and Price, Brian and Cohen, Scott and Kanan, Christopher}, booktitle = {CVPR}, year = {2018} }
@inproceedings{lu2021iconqa, title = {{IconQA}: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning}, author = {Lu, Pan and Qiu, Liang and Chen, Jiaqi and Xia, Tony and Zhao, Yizhou and Zhang, Wei and Yu, Zhou and Liang, Xiaodan and Zhu, Song-Chun}, booktitle = {NeurIPS}, year = {2021} }
@article{winata2024worldcuisines, title = {{WorldCuisines}: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines}, author = {Winata, Genta Indra and Hudi, Frederikus and Irawan, Patrick Amadeus and Putri, Rifki Afina and Wang, Yutong and Nohejl, Adam and Prathama, Ubaidillah Ariq and Ousidhoum, Nedjma and Amriani, Afifa and others}, journal = {arXiv preprint arXiv:2410.12705}, year = {2024} }
% =========================
% DRAKE unseen-task datasets
% =========================
@article{fu2023dreamsim, title = {{DreamSim}: Learning New Dimensions of Human Visual Similarity using Synthetic Data}, author = {Fu, Stephanie and Tamir, Netanel and Sundaram, Shobhita and Chai, Lucy and Zhang, Richard and Dekel, Tali and Isola, Phillip}, journal = {arXiv preprint arXiv:2306.09344}, year = {2023} }
@article{jiang2024mantis, title = {{Mantis}: Interleaved Multi-Image Instruction Tuning}, author = {Jiang, Dongfu and He, Xuan and Zeng, Huaye and Wei, Cong and Ku, Max and Liu, Qian and Chen, Wenhu}, journal = {Transactions on Machine Learning Research}, year = {2024} }
@article{yu2022coca, title = {{CoCa}: Contrastive Captioners Are Image-Text Foundation Models}, author = {Yu, Jiahui and Wang, Zirui and Vasudevan, Vijay and Yeung, Legg and Seyedhosseini, Mojtaba and Wu, Yonghui}, journal = {arXiv preprint arXiv:2205.01917}, year = {2022} }
@inproceedings{krojer2022imagecode, title = {Image Retrieval from Contextual Descriptions}, author = {Krojer, Benno and Adlakha, Vaibhav and Vineet, Vibhav and Goyal, Yash and Ponti, Edoardo and Reddy, Siva}, booktitle = {ACL}, year = {2022} }
@inproceedings{yagcioglu2018recipeqa, title = {{RecipeQA}: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes}, author = {Yagcioglu, Semih and Erdem, Aykut and Erdem, Erkut and Ikizler-Cinbis, Nazli}, booktitle = {EMNLP}, year = {2018} }
@inproceedings{hui2025hqedit, title = {{HQ-Edit}: A High-Quality Dataset for Instruction-Based Image Editing}, author = {Hui, Mude and Yang, Siwei and Zhao, Bingchen and Shi, Yichun and Wang, Heng and Wang, Peng and Xie, Cihang and Zhou, Yuyin}, booktitle = {ICLR}, year = {2025} }
How to Download
We recommend downloading each dataset's .tar archive separately:
# Example: Download Fashion200K
huggingface-cli download SNUMPR/DRAKE Fashion200K.tar --local-dir ./ --repo-type dataset
# Example: Download Bongard-HOI
huggingface-cli download SNUMPR/DRAKE Bongard-HOI.tar --local-dir ./ --repo-type dataset
After downloading, extract each archive:
tar -xvf Fashion200K.tar
# Repeat for other archives
Place extracted data under the dataset/ folder in the code repository, following the structure described in the README.
Dataset Structure
Once all archives are extracted, the folder layout will look like this:
dataset/
├── Fashion200K/
│ ├── full/images/ # raw image files
│ ├── train/
│ │ ├── dataset-0.json # task split 0
│ │ ├── dataset-1.json # task split 1
│ │ └── ...
│ └── test/
│ ├── dataset-0.json
│ ├── dataset-1.json
│ └── ...
└── <other_datasets>/
├── <image_folder>/
├── train/
│ ├── dataset-0.json
│ └── ...
└── test/
├── dataset-0.json
└── ...
Citation
If you use DRAKE in your research, please cite our paper:
@inproceedings{seo2026colora,
title = {Co-LoRA: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients},
author = {Seo, Minhyuk and Kim, Taeheon and Lee, Hankook and Choi, Jonghyun and Tuytelaars, Tinne},
booktitle = {The Fourteenth International Conference on Learning Representations (ICLR)},
year = {2026},
url = {https://openreview.net/forum?id=0g5Dk4Qfh0}
}
We are grateful to all the researchers and communities who created and maintained the original datasets that make DRAKE possible. 🙏