Enhance OOD-Eval dataset card: Add task category, links, abstract, and usage example
#2 opened by nielsr (HF Staff)

README.md CHANGED

---
license: cc-by-nd-4.0
task_categories:
- text-to-3d
tags:
- 3d
- benchmark
- out-of-domain
- evaluation
---

# OOD-Eval: Out-of-Domain Evaluation Prompts for Text-to-3D

This repository contains the **OOD-Eval** dataset, a collection of challenging out-of-domain (OOD) prompts designed for the rigorous evaluation of text-to-3D generation models. It was introduced in the paper [MV-RAG: Retrieval Augmented Multiview Diffusion](https://huggingface.co/papers/2508.16577).

The dataset is intended to assess how well text-to-3D approaches perform on rare or novel concepts, a setting in which models often produce inconsistent or inaccurate results.
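
Since this is a Hugging Face dataset, the prompts can presumably be loaded with the `datasets` library. The sketch below is illustrative only: the repository ID `yosefdayani/OOD-Eval` and the `prompt` column name are assumptions, not confirmed by this card, so check the dataset page for the exact repository ID and inspect the features after loading.

```python
from datasets import load_dataset

# Load the OOD-Eval prompts. NOTE: the repository ID and column name below
# are placeholders -- verify them against this dataset's page on the Hub.
ds = load_dataset("yosefdayani/OOD-Eval", split="train")

print(ds)     # shows the available columns and row count
print(ds[0])  # first record, e.g. a single OOD prompt

prompts = [row["prompt"] for row in ds]  # assumes a "prompt" text column
```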

* **Paper:** [MV-RAG: Retrieval Augmented Multiview Diffusion](https://huggingface.co/papers/2508.16577)
* **Project Page:** https://yosefdayani.github.io/MV-RAG/
* **Code:** https://github.com/yosefdayani/MV-RAG

## Paper Abstract

Text-to-3D generation approaches have advanced significantly by leveraging pretrained 2D diffusion priors, producing high-quality and 3D-consistent outputs. However, they often fail to produce out-of-domain (OOD) or rare concepts, yielding inconsistent or inaccurate results. To this end, we propose MV-RAG, a novel text-to-3D pipeline that first retrieves relevant 2D images from a large in-the-wild 2D database and then conditions a multiview diffusion model on these images to synthesize consistent and accurate multiview outputs. Training such a retrieval-conditioned model is achieved via a novel hybrid strategy bridging structured multiview data and diverse 2D image collections. This involves training on multiview data using augmented conditioning views that simulate retrieval variance for view-specific reconstruction, alongside training on sets of retrieved real-world 2D images using a distinctive held-out view prediction objective: the model predicts the held-out view from the other views to infer 3D consistency from 2D data. To facilitate a rigorous OOD evaluation, we introduce a new collection of challenging OOD prompts. Experiments against state-of-the-art text-to-3D, image-to-3D, and personalization baselines show that our approach significantly improves 3D consistency, photorealism, and text adherence for OOD/rare concepts, while maintaining competitive performance on standard benchmarks.

## Sample Usage

You can run the MV-RAG model (which uses this dataset for evaluation) on a set of locally retrieved images:

```bash
python main.py \
    --prompt "Cadillac 341 automobile car" \
    --retriever simple \
    --folder_path "assets/Cadillac 341 automobile car" \
    --seed 0 \
    --k 4 \
    --azimuth_start 45  # or 0 for front view
```

To see all available command-line options, run:

```bash
python main.py --help
```
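
To evaluate over many OOD-Eval prompts at once, one option is a small driver script around `main.py`. The sketch below is hypothetical: it assumes one folder of retrieved images per prompt under `assets/` (mirroring the single-prompt example above), and reuses the flag names from that example; adjust both per `python main.py --help`.

```python
import subprocess
from pathlib import Path

# Hypothetical batch driver: run MV-RAG once per OOD-Eval prompt.
prompts = ["Cadillac 341 automobile car"]  # extend with OOD-Eval prompts

for prompt in prompts:
    folder = Path("assets") / prompt  # assumed layout: one folder per prompt
    if not folder.is_dir():
        print(f"skipping {prompt!r}: no retrieved images at {folder}")
        continue
    subprocess.run(
        ["python", "main.py",
         "--prompt", prompt,
         "--retriever", "simple",
         "--folder_path", str(folder),
         "--seed", "0",
         "--k", "4",
         "--azimuth_start", "45"],
        check=True,
    )
```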

## Citation

If you use this benchmark or the MV-RAG model in your research, please cite:

```bibtex
@misc{dayani2025mvragretrievalaugmentedmultiview,
    title={MV-RAG: Retrieval Augmented Multiview Diffusion},
    author={Yosef Dayani and Omer Benishu and Sagie Benaim},
    year={2025},
    eprint={2508.16577},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2508.16577},
}
```