---
dataset_info:
  features:
    - name: source
      dtype: image
    - name: mask
      dtype: image
    - name: target
      dtype: image
    - name: caption
      dtype: string
    - name: category
      dtype: string
  splits:
    - name: train
      num_examples: 89927
    - name: validation
      num_examples: 4989
    - name: test
      num_examples: 5009
license: cc-by-nc-4.0
task_categories:
  - image-to-image
tags:
  - virtual-try-on
  - fashion
  - clothing
---

# OpenVTON

A large-scale virtual try-on dataset containing 99,925 garment-person image pairs with garment segmentation masks, captions, and category labels.

## Dataset Structure

Each sample contains:

- **source**: Garment image (the clothing item)
- **mask**: Garment segmentation mask
- **target**: Person wearing the garment (ground truth)
- **caption**: Text description of the clothing
- **category**: Clothing category (e.g., pants, jeans, shirt)
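
To illustrate how the three image fields relate, here is a minimal sketch that pastes garment pixels onto the person image through the mask. It assumes the mask is a single-channel image where white (255) marks the garment region; check the actual mask convention before relying on this. Synthetic solid-color images stand in for real samples.

```python
import numpy as np
from PIL import Image

def composite(source: Image.Image, target: Image.Image, mask: Image.Image) -> Image.Image:
    """Take pixels from `source` where the mask is white, from `target` elsewhere.

    Assumption: white (255) in the mask marks the garment region.
    """
    source = source.convert("RGB").resize(target.size)
    mask = mask.convert("L").resize(target.size)
    return Image.composite(source, target.convert("RGB"), mask)

# Synthetic 64x64 stand-ins for a real (source, target, mask) triple.
src = Image.new("RGB", (64, 64), (255, 0, 0))   # "garment": solid red
tgt = Image.new("RGB", (64, 64), (0, 0, 255))   # "person": solid blue
msk = Image.new("L", (64, 64), 0)
msk.paste(255, (16, 16, 48, 48))                # garment region

out = composite(src, tgt, msk)
print(out.getpixel((32, 32)))  # inside the mask -> garment (red) pixel
print(out.getpixel((0, 0)))    # outside the mask -> person (blue) pixel
```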

## Splits

| Split      | Samples    |
|------------|------------|
| Train      | 89,927     |
| Validation | 4,989      |
| Test       | 5,009      |
| **Total**  | **99,925** |

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("RenxingIntelligence/OpenVTON")
sample = dataset["train"][0]
sample["source"].show()  # garment image
sample["mask"].show()    # segmentation mask
sample["target"].show()  # person wearing garment
print(sample["caption"])
print(sample["category"])
```

## Benchmark and Paper

This dataset is part of **OpenVTON-Bench**, a large-scale benchmark designed for the systematic evaluation of controllable virtual try-on (VTON) models.

**OpenVTON-Bench** is introduced in our paper:

> **OpenVTON-Bench: A Large-Scale High-Resolution Benchmark for Controllable Virtual Try-On Evaluation**
> 📄 Paper: [https://arxiv.org/abs/2601.22725](https://arxiv.org/abs/2601.22725)
> 💻 Code: [https://github.com/RenxingIntelligence/OpenVTON-Bench](https://github.com/RenxingIntelligence/OpenVTON-Bench)

OpenVTON-Bench provides a standardized evaluation protocol for modern diffusion-based and transformer-based virtual try-on systems, enabling fair and reproducible comparison across different architectures.

---

## About OpenVTON-Bench

**OpenVTON-Bench** is a **large-scale, high-resolution benchmark** for the **systematic evaluation of controllable virtual try-on models**.

Unlike existing datasets and evaluation protocols that struggle with texture detail and semantic consistency, OpenVTON-Bench provides:

* 🖼️ **~100K image pairs** at resolutions up to **1536×1536**, enabling evaluation of fine-grained texture generation.
* 🏷️ **Fine-grained taxonomy** covering **20 garment categories** for balanced semantic evaluation.
* 📐 **Multi-level automated evaluation**, including:
  * Pixel fidelity
  * Garment consistency
  * Semantic realism
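
For illustration, the pixel-fidelity level can be sketched with PSNR, a common pixel-level metric; the exact metrics OpenVTON-Bench uses are defined in the paper and code, so treat this only as an example of the idea.

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a prediction and ground truth.

    Illustrative stand-in for a pixel-fidelity metric; not necessarily
    the metric used by OpenVTON-Bench.
    """
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: a single-pixel error on a 64x64 RGB image.
gt = np.zeros((64, 64, 3), dtype=np.uint8)
pred = gt.copy()
pred[0, 0, 0] = 10
print(psnr(pred, gt))  # high PSNR: almost identical images
```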

This benchmark enables **fair, reproducible, and scalable comparison** across modern virtual try-on systems.

---

## Citation

If you use this dataset or the benchmark in your research, please cite:

```bibtex
@misc{li2026openvtonbenchlargescalehighresolutionbenchmark,
  title={OpenVTON-Bench: A Large-Scale High-Resolution Benchmark for Controllable Virtual Try-On Evaluation},
  author={Jin Li and Tao Chen and Shuai Jiang and Weijie Wang and Jingwen Luo and Chenhui Wu},
  year={2026},
  eprint={2601.22725},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.22725},
}
```