---
license: cc-by-sa-4.0
task_categories:
- image-text-to-text
---
# Visual-RAG-ME
Project Page | Paper | GitHub
Official data for Visual-RAG-ME, a benchmark for multi-entity text-to-image retrieval and visual question answering (VQA). This dataset was introduced in the paper VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval.
## Dataset Description
Visual-RAG-ME is a new benchmark annotated for comparing visual features across related organisms. It is designed to evaluate models on two primary tasks:
- Multi-entity Text-to-Image Retrieval: Navigating structured visual relationships such as pose and viewpoint in knowledge-intensive scenarios.
- Visual Question Answering (VQA): Assessing the model's ability to answer questions based on retrieved visual information.
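
The dataset can be inspected with the 🤗 `datasets` library. The sketch below is illustrative only: the repository ID is a placeholder, and the split and column names are assumptions to be checked against this repo's actual configuration.

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual Hub path of this dataset.
ds = load_dataset("<org>/Visual-RAG-ME")

print(ds)                    # available splits (names are repo-specific)
split = next(iter(ds))       # pick the first split, whatever it is called
print(ds[split].features)    # inspect the column schema
print(ds[split][0])          # look at one example
```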
The benchmark highlights the limitations of traditional cross-modal similarity alignment and supports the Visualize-then-Retrieve (VisRet) paradigm, which improves retrieval by projecting textual queries into the image modality via generation.
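
For illustration, here is a minimal sketch of the Visualize-then-Retrieve idea: the textual query is first rendered into an image by a text-to-image model (left as a placeholder below), and retrieval is then performed by image-to-image similarity with an off-the-shelf CLIP encoder. This is an assumption-laden sketch, not the paper's implementation; see the GitHub repository for the official code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def generate_image(query: str) -> Image.Image:
    """Placeholder for any text-to-image model that renders the query
    (the 'visualize' step). Swap in your own generator or API call."""
    raise NotImplementedError

def embed_images(images: list[Image.Image]) -> torch.Tensor:
    # L2-normalized CLIP image embeddings, shape (len(images), d).
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def visualize_then_retrieve(query: str, corpus_images: list[Image.Image], top_k: int = 5):
    # 1. Visualize: project the textual query into the image modality.
    query_image = generate_image(query)
    # 2. Retrieve: rank corpus images by cosine similarity to the query image.
    q = embed_images([query_image])            # (1, d)
    c = embed_images(corpus_images)            # (N, d)
    scores = (q @ c.T).squeeze(0)              # (N,)
    k = min(top_k, len(corpus_images))
    return scores.topk(k).indices.tolist()
```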
## Citation
If you find this dataset useful, please cite the following paper:
```bibtex
@article{wu2025visret,
  title={VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval},
  author={Wu, Di and Wan, Yixin and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2505.20291},
  year={2025}
}
```