---
dataset_info:
  features:
    - name: source
      dtype: image
    - name: mask
      dtype: image
    - name: target
      dtype: image
    - name: caption
      dtype: string
    - name: category
      dtype: string
  splits:
    - name: train
      num_examples: 89927
    - name: validation
      num_examples: 4989
    - name: test
      num_examples: 5009
license: cc-by-nc-4.0
task_categories:
  - image-to-image
tags:
  - virtual-try-on
  - fashion
  - clothing
---

# OpenVTON
A large-scale virtual try-on dataset of ~100K garment–person image pairs with segmentation masks, captions, and category labels.

## Dataset Structure

Each sample contains:
- **source**: Garment image (clothing item)
- **mask**: Garment segmentation mask
- **target**: Person wearing the garment (ground truth)
- **caption**: Text description of the clothing
- **category**: Clothing category (e.g., pants, jeans, shirt)
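The mask is aligned pixel-wise with the source image, so the garment region can be isolated with a simple element-wise product. A minimal sketch using NumPy and synthetic stand-ins for a real sample's `source` and `mask` fields (the arrays here are illustrative, not actual dataset values):

```python
import numpy as np

# Synthetic stand-ins for one sample: a 4x4 RGB "source" garment image
# and a binary "mask" marking the garment region.
source = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # pretend the garment occupies the center

# Keep only garment pixels; background goes to zero.
garment_only = source * mask[:, :, None]

print(garment_only[0, 0])                           # background -> [0 0 0]
print(bool((garment_only[1, 1] == source[1, 1]).all()))  # garment preserved -> True
```

With real samples, convert the PIL images first, e.g. `np.array(sample["source"])` and `np.array(sample["mask"])`.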

## Splits

| Split | Samples |
|-------|---------|
| Train | 89,927 |
| Validation | 4,989 |
| Test | 5,009 |
| **Total** | **99,925** |

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("RenxingIntelligence/OpenVTON")
sample = dataset["train"][0]
sample["source"].show()  # garment image
sample["mask"].show()    # segmentation mask
sample["target"].show()  # person wearing garment
print(sample["caption"])
print(sample["category"])
```

## Benchmark and Paper

This dataset is part of **OpenVTON-Bench**, a large-scale benchmark designed for the systematic evaluation of controllable virtual try-on (VTON) models.

**OpenVTON-Bench** is introduced in our paper:

> **OpenVTON-Bench: A Large-Scale Benchmark for Controllable Virtual Try-On**
> 📄 Paper: [https://arxiv.org/abs/2601.22725](https://arxiv.org/abs/2601.22725)
> 💻 Code: [https://github.com/RenxingIntelligence/OpenVTON-Bench](https://github.com/RenxingIntelligence/OpenVTON-Bench)

OpenVTON-Bench provides a standardized evaluation protocol for modern diffusion-based and transformer-based virtual try-on systems, enabling fair and reproducible comparison across different architectures.

---

## About OpenVTON-Bench

**OpenVTON-Bench** is a **large-scale, high-resolution benchmark** designed for the **systematic evaluation of controllable virtual try-on models**.

Unlike existing datasets and evaluation protocols that struggle with texture details and semantic consistency, OpenVTON-Bench provides:

* 🖼️ **~100K Image Pairs** with resolutions up to **1536×1536**, enabling evaluation of fine-grained texture generation.
* 🏷️ **Fine-Grained Taxonomy** covering **20 garment categories** for balanced semantic evaluation.
* 📐 **Multi-Level Automated Evaluation**, including:

  * Pixel fidelity
  * Garment consistency
  * Semantic realism

This benchmark enables **fair, reproducible, and scalable comparison** across modern virtual try-on systems.
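The benchmark's exact metric definitions are given in the paper; as a rough illustration of the pixel-fidelity level, a standard measure is PSNR between a generated try-on result and the ground-truth `target` (a generic formulation, not necessarily the benchmark's implementation):

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the target."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)

# Toy check: a flat target and a copy shifted by a constant offset of 10.
target = np.full((8, 8, 3), 100, dtype=np.uint8)
pred = target + 10
print(round(psnr(pred, target), 2))  # MSE = 100 -> 10*log10(65025/100) ≈ 28.13
```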

---

## Citation
If you use this dataset or the benchmark in your research, please cite:
```bibtex
@misc{li2026openvtonbenchlargescalehighresolutionbenchmark,
      title={OpenVTON-Bench: A Large-Scale High-Resolution Benchmark for Controllable Virtual Try-On Evaluation}, 
      author={Jin Li and Tao Chen and Shuai Jiang and Weijie Wang and Jingwen Luo and Chenhui Wu},
      year={2026},
      eprint={2601.22725},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.22725}, 
}
```