Add metadata, paper/GitHub links, and dataset description (#2)
Opened by nielsr (HF Staff)

README.md CHANGED
````diff
@@ -1,5 +1,32 @@
 ---
 license: cc-by-sa-4.0
+task_categories:
+- image-text-to-text
 ---
 
-
+# Visual-RAG-ME
+
+[**Project Page**](https://xiaowu0162.github.io/visret/) | [**Paper**](https://huggingface.co/papers/2505.20291) | [**GitHub**](https://github.com/xiaowu0162/visualize-then-retrieve)
+
+Official data for **Visual-RAG-ME**, a benchmark for multi-entity text-to-image retrieval and visual question answering (VQA). This dataset was introduced in the paper [VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval](https://huggingface.co/papers/2505.20291).
+
+## Dataset Description
+
+Visual-RAG-ME is a new benchmark annotated for comparing features across related organisms. It is designed to evaluate models on two primary tasks:
+1. **Multi-entity Text-to-Image Retrieval**: Navigating structured visual relationships such as pose and viewpoint in knowledge-intensive scenarios.
+2. **Visual Question Answering (VQA)**: Assessing the model's ability to answer questions based on retrieved visual information.
+
+The benchmark highlights the limitations of traditional cross-modal similarity alignment and supports the **Visualize-then-Retrieve (VisRet)** paradigm, which improves retrieval by projecting textual queries into the image modality via generation.
+
+## Citation
+
+If you find this dataset useful, please cite the following paper:
+
+```bibtex
+@article{wu2025visret,
+  title={VisRet: Visualization Improves Knowledge-Intensive Text-to-Image Retrieval},
+  author={Wu, Di and Wan, Yixin and Chang, Kai-Wei},
+  journal={arXiv preprint arXiv:2505.20291},
+  year={2025}
+}
+```
````
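
The new card does not include a usage snippet. As a general reference, a dataset hosted this way is normally loaded with the `datasets` library; the repository id below is a placeholder and is not stated in the README:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hugging Face dataset path.
ds = load_dataset("ORG/Visual-RAG-ME")

# Inspect the available splits and features before wiring up retrieval or VQA evaluation.
print(ds)
```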
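
The Visualize-then-Retrieve paradigm described in the README can be sketched roughly as: generate an image from the text query with a text-to-image model, then rank corpus images by in-modality image-to-image similarity instead of cross-modal text-to-image similarity. The model choices and the `visualize_then_retrieve` helper below are illustrative assumptions, not the paper's exact implementation:

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

# Illustrative model choices; any text-to-image generator and image encoder could be swapped in.
t2i = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def visualize_then_retrieve(query_text, corpus_images, top_k=5):
    """Rank corpus_images (a list of PIL images) against a textual query."""
    # Step 1: project the textual query into the image modality via generation.
    query_image = t2i(query_text).images[0]

    # Step 2: embed the generated query image and the corpus images with the same image encoder.
    inputs = processor(images=[query_image] + corpus_images, return_tensors="pt")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)

    # Step 3: rank corpus images by image-to-image cosine similarity to the generated query image.
    sims = feats[0] @ feats[1:].T
    return sims.topk(min(top_k, len(corpus_images))).indices.tolist()
```

Embedding the generated query image and the corpus images with the same encoder keeps the comparison within one modality, which is the gap in cross-modal similarity alignment that the README points to.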