---
license: cc-by-4.0
configs:
- config_name: Element_Classification
  data_files:
  - split: test
    path: Element_Classification/test-*
- config_name: Attribute_Regconition
  data_files:
  - split: test
    path: Attribute_Regconition/test-*
- config_name: Visual_Grounding
  data_files:
  - split: test
    path: Visual_Grounding/test-*
- config_name: OCR
  data_files:
  - split: test
    path: OCR/test-*
- config_name: Code_Error_Correction
  data_files:
  - split: test
    path: Code_Error_Correction/test-*
- config_name: Code_Function_Editing
  data_files:
  - split: test
    path: Code_Function_Editing/test-*
- config_name: Webpage_HTML_Matching
  data_files:
  - split: test
    path: Webpage_HTML_Matching/test-*
- config_name: Webpage_HTML_Retrieval
  data_files:
  - split: test
    path: Webpage_HTML_Retrieval/test-*
dataset_info:
- config_name: Element_Classification
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 442962174
    num_examples: 950
  download_size: 442962174
  dataset_size: 442962174
- config_name: Attribute_Regconition
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 1679258113
    num_examples: 3718
  download_size: 1679258113
  dataset_size: 1679258113
- config_name: Visual_Grounding
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 1897962456
    num_examples: 3934
  download_size: 1897962456
  dataset_size: 1897962456
- config_name: OCR
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: target_[x1,y1,x2,y2]
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 1147237990
    num_examples: 2460
  download_size: 1147237990
  dataset_size: 1147237990
- config_name: Code_Error_Correction
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: code_with_error
    dtype: string
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 2885440
    num_examples: 2635
  download_size: 2885440
  dataset_size: 2885440
- config_name: Code_Function_Editing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: function_description
    dtype: string
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 2712168
    num_examples: 2290
  download_size: 2712168
  dataset_size: 2712168
- config_name: Webpage_HTML_Matching
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 1003289265
    num_examples: 2143
  download_size: 1003289265
  dataset_size: 1003289265
- config_name: Webpage_HTML_Retrieval
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: subtask
    dtype: string
  splits:
  - name: test
    num_bytes: 1109887493
    num_examples: 2345
  download_size: 1109887493
  dataset_size: 1109887493
---

# WebUIBench

Dataset for the paper: [WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code](https://arxiv.org/abs/2404.05955)

🏠 [Homepage](https://github.com/MAIL-Tele-AI/WebUIBench) | [**📖 arXiv**](https://arxiv.org/abs/2404.05955)

## Introduction

<!-- ![Task overview of WebUIBench from the WebUI Perception, HTML Programming, WebUI-HTML Understanding subtask and WebUI-to-Code task](https://github.com/MAIL-Tele-AI/WebUIBench/blob/main/imgs/overview.png)  -->

We introduce WebUIBench, a large-scale and comprehensive benchmark designed to evaluate the WebUI-to-Code capabilities of Multimodal Large Language Models (MLLMs). WebUIBench comprises over **21K question-answer pairs** derived from more than **0.7K real-world websites**, encompassing **9 distinct subtasks**. We conducted extensive experiments on 7 state-of-the-art closed-source and 22 prominent open-source MLLMs. Our key findings highlight the models' deficiencies in webpage generation tasks across various dimensions, including cross-modality reasoning, element localization, and webpage layout generation.
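Each subtask ships as its own configuration with a single `test` split. Below is a minimal loading sketch using the 🤗 Datasets library; the hub repository id is a placeholder (substitute the actual id of this dataset), and `load_subtask` is a hypothetical helper, not part of the release:

```python
# Configuration names as declared in this card's metadata.
CONFIGS = [
    "Element_Classification",
    "Attribute_Regconition",  # note: spelled this way in the repo paths
    "Visual_Grounding",
    "OCR",
    "Code_Error_Correction",
    "Code_Function_Editing",
    "Webpage_HTML_Matching",
    "Webpage_HTML_Retrieval",
]


def load_subtask(name: str, repo_id: str = "<org>/WebUIBench"):
    """Load the test split of one subtask. repo_id is a placeholder."""
    if name not in CONFIGS:
        raise ValueError(f"unknown config: {name}")
    # Imported lazily so config-name validation works without the library.
    from datasets import load_dataset  # pip install datasets
    return load_dataset(repo_id, name, split="test")


# Example (requires network access and the real repository id):
# ds = load_subtask("OCR")
# print(ds[0]["question"], ds[0]["answer"])
```

Image-based configs yield `image` as a PIL image alongside `question`, `answer`, and `subtask` string fields; the two code-editing configs are text-only.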


## Contact
- Zhiyu Lin: [zyllin@bjtu.edu.cn](zyllin@bjtu.edu.cn)
- Zhengda Zhou: [zhengdazhou@smail.nju.edu.cn](zhengdazhou@smail.nju.edu.cn)
- Zhiyuan Zhao: [tuzixini@gmail.com](tuzixini@gmail.com)

# 🚩Citation

If you find this work helpful, please cite it as follows. Thanks!

```bibtex
@article{webuibench,
  title={WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code},
  author={Zhiyu Lin and Zhengda Zhou and Zhiyuan Zhao and Tianrui Wan and Yilun Ma and Junyu Gao and XueLong Li},
  journal={arXiv preprint arXiv:xx},
  year={2025}
}
```