---
license: cc-by-sa-4.0
task_categories:
- image-text-to-text
---
# Visual-RAG-ME

[**Project Page**](https://xiaowu0162.github.io/visret/) | [**Paper**](https://huggingface.co/papers/2505.20291) | [**GitHub**](https://github.com/xiaowu0162/visualize-then-retrieve)

Official data for **Visual-RAG-ME**, a benchmark for multi-entity text-to-image retrieval and visual question answering (VQA). This dataset was introduced in the paper [VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval](https://huggingface.co/papers/2505.20291).

## Dataset Description
Visual-RAG-ME is a new benchmark annotated for comparing visual features across related organisms. It is designed to evaluate models on two primary tasks:

1. **Multi-entity Text-to-Image Retrieval**: retrieving images for knowledge-intensive queries that hinge on structured visual relationships such as pose and viewpoint.
2. **Visual Question Answering (VQA)**: assessing a model's ability to answer questions based on the retrieved visual information.

The benchmark highlights the limitations of traditional cross-modal similarity alignment and supports the **Visualize-then-Retrieve (VisRet)** paradigm, which improves retrieval by projecting textual queries into the image modality via image generation.
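
The data can be loaded with the 🤗 `datasets` library. The snippet below is only a minimal sketch: the repository id is a placeholder and the field names depend on the dataset's actual schema, so check this repo's file layout for the exact identifiers.

```python
from datasets import load_dataset

# Minimal sketch for loading the benchmark from the Hugging Face Hub.
# NOTE: "your-org/Visual-RAG-ME" is a placeholder, not the official repo id;
# replace it with the id of this dataset repository.
ds = load_dataset("your-org/Visual-RAG-ME")

# Print the available splits and the first example of the first split.
# Field names depend on the dataset's actual schema.
print(ds)
first_split = next(iter(ds.values()))
print(first_split[0])
```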
## Citation

If you find this dataset useful, please cite the following paper:
```bibtex
@article{wu2025visret,
  title={VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval},
  author={Wu, Di and Wan, Yixin and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2505.20291},
  year={2025}
}
```