<p align="center">
  <img src="./docs/assets/logo.svg" alt="Logo" width="120" />
</p>
<p align="center">
  <a href="https://github.com/PKU-DAIR">
    <img alt="Static Badge" src="https://img.shields.io/badge/%C2%A9-PKU--DAIR-%230e529d?labelColor=%23003985">
  </a>
</p>

## **WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning**

[Paper](https://arxiv.org/pdf/2510.04097) | [中文](./docs/Chinese.md)

## **🔍 Overview**

**WebRenderBench** is a large-scale benchmark designed to advance **WebUI-to-Code** research for multimodal large language models (MLLMs) through evaluation on real-world webpages. It provides:

* **45,100** real webpages collected from public portal websites
* **High diversity and complexity**, covering a wide range of industries and design styles
* **Novel evaluation metrics** that quantify **layout and style consistency** based on rendered pages
* The **ALISA reinforcement learning framework**, which uses the new metrics as reward signals to optimize generation quality

---

## **🚀 Key Features**

### **Beyond the Limitations of Traditional Benchmarks**

WebRenderBench addresses the core issues of existing WebUI-to-Code benchmarks in data quality and evaluation methodology:

| Aspect | Traditional Benchmarks | Advantages of WebRenderBench |
| :--- | :--- | :--- |
| **Data Quality** | Small-scale, simply structured, or LLM-synthesized data with limited diversity | Large-scale, real-world, structurally complex webpages that pose a greater challenge |
| **Evaluation Reliability** | Relies on costly visual APIs or on code-structure comparison, which fails under code asymmetry | Objectively and efficiently evaluates layout and style consistency from rendered results |
| **Training Effectiveness** | Hard to optimize on crawled data with asymmetric code structures | The proposed metrics can serve directly as RL reward signals to improve model optimization |

---

### **Dataset Characteristics**

<p align="center">
  <img src="./docs/assets/framework.svg" alt="WebRenderBench and ALISA Framework" width="80%" />
</p>
<p align="center"><i>Figure 1: Dataset construction pipeline and the ALISA framework</i></p>

Our dataset is constructed through a systematic process to ensure both **high quality** and **diversity**:

1. **Data Collection**: URLs are obtained from open enterprise-portal datasets, and a high-concurrency crawler captures 210K webpages along with their static resources.
2. **Data Processing**: MHTML pages are converted into HTML files, and cross-domain resources are localized so that every page can be rendered offline and captured as a full-page screenshot.
3. **Data Cleaning**: Pages with abnormal sizes, rendering errors, or missing styles are filtered out. Multimodal QA models further remove low-quality samples with large blank areas or overlapping elements, yielding 110K valid pages.
4. **Data Categorization**: Pages are categorized by industry and by complexity (measured via *Group Count*) to ensure a balanced distribution across difficulty levels and domains.

Finally, we construct a dataset of **45.1K** samples, evenly split into training and test sets.
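
As a rough illustration of the rendering step, the snippet below renders a local HTML file and saves a full-page screenshot with Selenium. It is a minimal sketch, not the authors' crawling pipeline; the browser options, paths, and window size are assumptions.

```python
# Minimal sketch: render a local page and capture a full-page screenshot.
# Paths and window size are hypothetical; this is not the official pipeline.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")        # render without a visible window
driver = webdriver.Chrome(options=options)

driver.get("file:///path/to/page.html")       # hypothetical local HTML file
height = driver.execute_script("return document.body.scrollHeight")
driver.set_window_size(1920, max(height, 1080))  # grow window to the full page
driver.save_screenshot("page.png")
driver.quit()
```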

---

## **🌟 Evaluation Framework**

We propose a novel evaluation protocol based on **rendered webpages**, quantifying model performance along two key dimensions: **layout consistency** and **style consistency**.

---

### **RDA (Relative Layout Difference of Associated Elements)**

**Purpose:** Measures relative layout differences between matched elements.

* **Element Association:** Matches corresponding elements between the generated and target pages using text similarity (LCS) and geometric distance.
* **Positional Deviation:** The page is divided into a 3×3 grid and associated elements are compared cell by cell: if two associated elements fall in different grid cells, the score is 0; otherwise a deviation-based score is computed (see the sketch after this list).
* **Uniqueness Weighting:** Each element is weighted by its uniqueness (inverse group size), giving higher importance to distinctive components.

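The sketch below shows one plausible reading of this scoring rule; the element representation (`x`, `y`, `group_size`) and the exact deviation formula are assumptions, not the official implementation.

```python
# Hedged sketch of RDA-style scoring; element fields and the deviation
# formula are assumptions, not the paper's implementation.

def grid_cell(x: float, y: float, n: int = 3) -> tuple[int, int]:
    """Map a normalized position in [0, 1] to a cell of an n x n grid."""
    return min(int(x * n), n - 1), min(int(y * n), n - 1)

def rda_score(pairs: list[tuple[dict, dict]]) -> float:
    """pairs: (generated, target) elements already associated via
    text similarity (LCS) and geometric distance."""
    total = weight_sum = 0.0
    for gen, tgt in pairs:
        w = 1.0 / max(tgt["group_size"], 1)            # uniqueness weight
        if grid_cell(gen["x"], gen["y"]) != grid_cell(tgt["x"], tgt["y"]):
            score = 0.0                                 # different grid cells
        else:
            dev = abs(gen["x"] - tgt["x"]) + abs(gen["y"] - tgt["y"])
            score = max(0.0, 1.0 - dev)                 # deviation-based score
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```
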
---

### **GDA (Group-wise Difference in Element Counts)**

**Purpose:** Measures group-level alignment of axis-aligned elements.

* **Grouping:** Elements aligned on the same horizontal or vertical axis are treated as one group.
* **Count Comparison:** Checks whether corresponding groups in the generated and target pages contain the same number of elements (see the sketch after this list).
* **Uniqueness Weighting:** Weighted by element uniqueness to emphasize key structural alignment.

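A minimal sketch of this group-count comparison follows; the rounded `row`/`col` axis keys and the per-group weighting are assumptions.

```python
# Hedged sketch of GDA-style scoring; axis keys and weighting are
# assumptions, not the paper's implementation.
from collections import Counter

def axis_groups(elements: list[dict]) -> Counter:
    """Count elements per horizontal ("row") and vertical ("col") axis."""
    groups: Counter = Counter()
    for el in elements:
        groups[("row", el["row"])] += 1
        groups[("col", el["col"])] += 1
    return groups

def gda_score(generated: list[dict], target: list[dict]) -> float:
    gen_groups, tgt_groups = axis_groups(generated), axis_groups(target)
    total = weight_sum = 0.0
    for axis, tgt_count in tgt_groups.items():
        w = 1.0 / tgt_count                    # uniqueness-style weight
        gen_count = gen_groups.get(axis, 0)
        total += w * (min(gen_count, tgt_count) / tgt_count)
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```
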
---

### **SDA (Style Difference of Associated Elements)**

**Purpose:** Evaluates fine-grained style differences between associated elements.

* **Multi-Dimensional Style Extraction:** Measures differences in foreground color, background color, font size, and border radius.
* **Weighted Averaging:** Computes a weighted mean of style-similarity scores across all associated elements to obtain an overall style score (see the sketch after this list).

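The sketch below illustrates one way such per-property similarities could be computed and averaged; the style fields and the plain (unweighted) mean are simplifying assumptions.

```python
# Hedged sketch of SDA-style scoring; style fields and the unweighted
# mean are simplifications, not the paper's implementation.

def color_sim(c1: tuple, c2: tuple) -> float:
    """Similarity of two RGB colors, normalized channel-wise."""
    return 1.0 - sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)

def scalar_sim(v1: float, v2: float) -> float:
    """Similarity of two non-negative scalars (font size, border radius)."""
    m = max(v1, v2)
    return 1.0 if m == 0 else 1.0 - abs(v1 - v2) / m

def sda_score(pairs: list[tuple[dict, dict]]) -> float:
    """pairs: associated (generated, target) computed styles."""
    scores = [
        (color_sim(gen["color"], tgt["color"])
         + color_sim(gen["background"], tgt["background"])
         + scalar_sim(gen["font_size"], tgt["font_size"])
         + scalar_sim(gen["border_radius"], tgt["border_radius"])) / 4
        for gen, tgt in pairs
    ]
    return sum(scores) / len(scores) if scores else 0.0
```
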
---

## **⚙️ Installation Guide**

### **Core Dependencies**

<!--
# Recommended: use vLLM for faster inference
pip install vllm "transformers>=4.40.0" "torch>=2.0"

# Other dependencies
pip install selenium pandas scikit-learn pillow

# Alternatively:
pip install -r requirements.txt
-->

Coming Soon

---

## **📊 Benchmark Workflow**

### **Directory Structure**

```
|- docs/                   # Documentation
|- scripts/                # Evaluation scripts
|- web_render_test.jsonl   # Test set metadata
|- web_render_train.jsonl  # Training set metadata
|- test_webpages.zip       # Test set webpages
|- train_webpages.zip      # Training set webpages
|- test_screenshots.zip    # Test set screenshots
|- train_screenshots.zip   # Training set screenshots
```

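To get a first look at the metadata, the snippet below simply loads one of the `.jsonl` files; the available fields depend on the released data, so none are assumed here.

```python
# Minimal sketch: load the test-set metadata and inspect its fields.
import json

with open("web_render_test.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(len(records))       # number of test samples
print(records[0].keys())  # inspect the available metadata fields
```
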
---

### **Implementation Steps**

1. **Data Preparation**

   * Download the WebRenderBench dataset and extract the webpage and screenshot archives.
   * Each pair consists of a real webpage (HTML + resources) and its rendered screenshot.

2. **Model Inference**

   * Run inference with an engine such as **vLLM** or **LMDeploy** and save the results to the designated directory (see the sketch after this list).

3. **Evaluation**

   * Run `scripts/1_get_evaluation.py`.
   * The script launches a web server to render both the generated and the target HTML.
   * WebDriver extracts DOM information and computes the **RDA**, **GDA**, and **SDA** scores.
   * Results are saved under `save_results/`.
   * Final scores are aggregated via `scripts/2_compute_alisa_scores.py`.

4. **ALISA Training (Optional)**

   * Use `models/train_rl.py` for reinforcement learning fine-tuning. *(Coming Soon)*
   * The computed evaluation scores serve as reward signals to optimize policy models via methods such as **GRPO**.

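For step 2, the sketch below shows one plausible vLLM invocation that turns a screenshot into HTML; the model name, prompt template, and paths are assumptions, and the image placeholder must follow the chosen model's chat template.

```python
# Hedged sketch of screenshot-to-HTML inference with vLLM; model name,
# prompt format, and paths are assumptions, not fixed by this repo.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct")        # any supported MLLM
params = SamplingParams(temperature=0.0, max_tokens=8192)

image = Image.open("screenshots/sample.png")          # hypothetical input
prompt = "USER: <image>\nGenerate the HTML/CSS that reproduces this page. ASSISTANT:"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}}, params
)
with open("predictions/sample.html", "w", encoding="utf-8") as f:
    f.write(outputs[0].outputs[0].text)               # save for evaluation
```
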
---

## **📈 Model Performance Insights**

We evaluate **17 multimodal large language models** of varying scales and architectures, both open- and closed-source.

* **Combined Scores of RDA, GDA, and SDA (%)**

![Inference Results](./docs/assets/inference_results.png)

**Key Findings:**

* Overall, larger models achieve higher consistency. **GPT-4.1-mini** and **Qwen-VL-Plus** perform best among closed-source models.
* While most models perform reasonably on simple pages (*Group Count* < 50), **RDA scores drop sharply** as page complexity increases; precise layout alignment remains a major challenge.
* After reinforcement learning with the **ALISA framework**, **Qwen2.5-VL-7B** improves substantially across all complexity levels, even surpassing **GPT-4.1-mini** on simpler cases.

---

## **📅 Future Work**

* [ ] Release pretrained models fine-tuned with the ALISA framework
* [ ] Expand dataset coverage to more industries and dynamic interaction patterns
* [ ] Open-source the complete toolchain for data collection, cleaning, and evaluation

---

## **📜 License**

The **WebRenderBench dataset** is released for **research purposes only**.
All accompanying code will be published under the **Apache License 2.0**.

All webpages in the dataset are collected from publicly accessible enterprise portals.
To protect privacy, all personal and sensitive information has been removed or modified.

---

## **📚 Citation**

If you use our dataset or framework in your research, please cite the following paper:

```bibtex
@article{webrenderbench2025,
  title={WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning},
  author={Anonymous Author(s)},
  journal={arXiv preprint arXiv:2510.04097},
  year={2025}
}
```