|
|
--- |
|
|
license: cc-by-4.0 |
|
|
configs: |
|
|
- config_name: Element_Classification |
|
|
data_files: |
|
|
- split: test |
|
|
path: Element_Classification/test-* |
|
|
- config_name: Attribute_Regconition |
|
|
data_files: |
|
|
- split: test |
|
|
path: Attribute_Regconition/test-* |
|
|
- config_name: Visual_Grounding |
|
|
data_files: |
|
|
- split: test |
|
|
path: Visual_Grounding/test-* |
|
|
- config_name: OCR |
|
|
data_files: |
|
|
- split: test |
|
|
path: OCR/test-* |
|
|
- config_name: Code_Error_Correction |
|
|
data_files: |
|
|
- split: test |
|
|
path: Code_Error_Correction/test-* |
|
|
- config_name: Code_Function_Editing |
|
|
data_files: |
|
|
- split: test |
|
|
path: Code_Function_Editing/test-* |
|
|
- config_name: Webpage_HTML_Matching |
|
|
data_files: |
|
|
- split: test |
|
|
path: Webpage_HTML_Matching/test-* |
|
|
- config_name: Webpage_HTML_Retrieval
|
|
data_files: |
|
|
- split: test |
|
|
path: Webpage_HTML_Retrieval/test-* |
|
|
dataset_info: |
|
|
- config_name: Element_Classification |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: image_id |
|
|
dtype: string |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 442962174 |
|
|
num_examples: 950 |
|
|
download_size: 442962174 |
|
|
dataset_size: 442962174 |
|
|
- config_name: Attribute_Regconition |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: image_id |
|
|
dtype: string |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 1679258113 |
|
|
num_examples: 3718 |
|
|
download_size: 1679258113 |
|
|
dataset_size: 1679258113 |
|
|
- config_name: Visual_Grounding |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: image_id |
|
|
dtype: string |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 1897962456 |
|
|
num_examples: 3934 |
|
|
download_size: 1897962456 |
|
|
dataset_size: 1897962456 |
|
|
- config_name: OCR |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: image_id |
|
|
dtype: string |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: target_[x1,y1,x2,y2] |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 1147237990 |
|
|
num_examples: 2460 |
|
|
download_size: 1147237990 |
|
|
dataset_size: 1147237990 |
|
|
- config_name: Code_Error_Correction |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: code_with_error |
|
|
dtype: string |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 2885440 |
|
|
num_examples: 2635 |
|
|
download_size: 2885440 |
|
|
dataset_size: 2885440 |
|
|
- config_name: Code_Function_Editing |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: function_description |
|
|
dtype: string |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 2712168 |
|
|
num_examples: 2290 |
|
|
download_size: 2712168 |
|
|
dataset_size: 2712168 |
|
|
- config_name: Webpage_HTML_Matching |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: image_id |
|
|
dtype: string |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 1003289265 |
|
|
num_examples: 2143 |
|
|
download_size: 1003289265 |
|
|
dataset_size: 1003289265 |
|
|
- config_name: Webpage_HTML_Retrieval |
|
|
features: |
|
|
- name: id |
|
|
dtype: string |
|
|
- name: question |
|
|
dtype: string |
|
|
- name: image_id |
|
|
dtype: string |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: answer |
|
|
dtype: string |
|
|
- name: subtask |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 1109887493 |
|
|
num_examples: 2345 |
|
|
download_size: 1109887493 |
|
|
dataset_size: 1109887493 |
|
|
--- |
|
|
|
|
|
# WebUIBench |
|
|
|
|
|
Dataset for the paper: [WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code](https://arxiv.org/abs/2404.05955) |
|
|
|
|
|
🏠 [Homepage](https://github.com/MAIL-Tele-AI/WebUIBench) | [**📖 arXiv**](https://arxiv.org/abs/2404.05955) |
|
|
|
|
|
## Introduction |
|
|
|
|
|
<!--  --> |
|
|
|
|
|
We introduce WebUIBench, a large-scale, comprehensive benchmark for evaluating the WebUI-to-Code capabilities of Multimodal Large Language Models (MLLMs). WebUIBench comprises over **21K question-answer pairs** drawn from more than **700 real-world websites** and spans **9 distinct subtasks**. We conducted extensive experiments on 7 state-of-the-art closed-source and 22 prominent open-source MLLMs. Our key findings reveal model deficiencies across several dimensions of webpage generation, including cross-modality reasoning, element localization, and webpage layout generation.
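Each evaluation task ships as a separate config with a single `test` split, as declared in the YAML header above. The sketch below shows one way to load a subtask with the 🤗 `datasets` library; the Hub repository id is a placeholder you should replace with this dataset's actual id, and config names are copied verbatim from the header (including the `Attribute_Regconition` spelling used by the data paths).

```python
# Config names exactly as declared in this card's YAML header.
CONFIGS = [
    "Element_Classification",
    "Attribute_Regconition",  # spelled as in the repository's data paths
    "Visual_Grounding",
    "OCR",
    "Code_Error_Correction",
    "Code_Function_Editing",
    "Webpage_HTML_Matching",
    "Webpage_HTML_Retrieval",
]


def load_subtask(repo_id: str, config: str):
    """Load one WebUIBench subtask; every config exposes only a 'test' split."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config!r}")
    from datasets import load_dataset  # pip install datasets

    return load_dataset(repo_id, config, split="test")


# Usage (repo id is a placeholder, not the confirmed Hub id):
# ds = load_subtask("ORG/WebUIBench", "Element_Classification")
# print(ds[0]["question"], ds[0]["answer"])
```

Note that the image-based configs share the features `id`, `question`, `image_id`, `image`, `answer`, and `subtask`, while the code-editing configs replace the image fields with `code_with_error` or `function_description`, so downstream processing should branch on the config name.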
|
|
|
|
|
|
|
|
## Contact |
|
|
- Zhiyu Lin: [zyllin@bjtu.edu.cn](mailto:zyllin@bjtu.edu.cn)


- Zhengda Zhou: [zhengdazhou@smail.nju.edu.cn](mailto:zhengdazhou@smail.nju.edu.cn)


- Zhiyuan Zhao: [tuzixini@gmail.com](mailto:tuzixini@gmail.com)
|
|
|
|
|
## 🚩 Citation
|
|
|
|
|
If you find this work helpful, please cite it as follows. Thanks!
|
|
|
|
|
```bibtex |
|
|
@article{webuibench,
  title={WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code},
  author={Zhiyu Lin and Zhengda Zhou and Zhiyuan Zhao and Tianrui Wan and Yilun Ma and Junyu Gao and XueLong Li},
  journal={arXiv preprint arXiv:xx},
  year={2025}
}
|
|
``` |
|
|
|