---
license: cc-by-sa-4.0
task_categories:
- image-text-to-text
---

# Visual-RAG-ME

[**Project Page**](https://xiaowu0162.github.io/visret/) | [**Paper**](https://huggingface.co/papers/2505.20291) | [**GitHub**](https://github.com/xiaowu0162/visualize-then-retrieve)

Official data for **Visual-RAG-ME**, a benchmark for multi-entity text-to-image retrieval and visual question answering (VQA). This dataset was introduced in the paper [VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval](https://huggingface.co/papers/2505.20291).

## Dataset Description

Visual-RAG-ME is a benchmark annotated for comparing visual features across related organisms. It is designed to evaluate models on two primary tasks:
1. **Multi-entity Text-to-Image Retrieval**: Navigating structured visual relationships such as pose and viewpoint in knowledge-intensive scenarios.
2. **Visual Question Answering (VQA)**: Assessing the model's ability to answer questions based on retrieved visual information.

The benchmark highlights the limitations of traditional cross-modal similarity alignment and supports the **Visualize-then-Retrieve (VisRet)** paradigm, which improves retrieval by projecting textual queries into the image modality via generation.

## Citation

If you find this dataset useful, please cite the following paper:

```bibtex
@article{wu2025visret,
  title={VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval},
  author={Wu, Di and Wan, Yixin and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2505.20291},
  year={2025}
}
```