---
dataset_info:
  features:
  - name: src_html_path
    dtype: string
  - name: src_css_path
    dtype: string
  - name: web_type
    dtype: string
  - name: css_framework
    dtype: string
  - name: image_instruct
    dtype: image
  - name: modification_category
    dtype: string
  - name: style
    dtype: string
  - name: image_has_arrow
    dtype: bool
  - name: image_has_enclosure
    dtype: bool
  - name: image_has_ui_sketch
    dtype: bool
  - name: ref_html_path
    dtype: string
  - name: ref_css_path
    dtype: string
  splits:
  - name: test
    num_bytes: 888818482
    num_examples: 350
  download_size: 887208210
  dataset_size: 888818482
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# UI-Redline-bench

This dataset contains the benchmark data for the paper **"UI-Redline-bench: 赤入れ指示によるWebUIコード修正ベンチマーク"** (UI-Redline-bench: A Benchmark for Web UI Code Modification from Redline Instructions).

The benchmark evaluates the capability of Vision-Language Models (VLMs) to modify Web UI code (HTML/CSS) based on visual "redline" instructions (handwritten or digital) drawn on screenshots.

**📄 [Paper](https://www.anlp.jp/proceedings/annual_meeting/2026/pdf_dir/B8-1.pdf)** | **💻 [GitHub Repository (Evaluation Code & Runnable Environment)](https://github.com/future-architect/UI-Redline-bench)** 

## Dataset Description

* **Repository:** [future-architect/UI-Redline-bench](https://github.com/future-architect/UI-Redline-bench)
* **Total Instances:** 350
* **Web Types:** News, Online Store, Portfolio
* **CSS Frameworks:** Vanilla, Bootstrap, Tailwind CSS
* **Modification Categories:** Layout, Color Contrast, Text Readability, Button Usability, Learnability

### Usage

This Hugging Face dataset contains only the instruction images and their metadata.
To run the experiments, clone the GitHub repository for the evaluation scripts as shown below.
Optionally, you can also clone this Hugging Face repository if you want to inspect the images manually.

```bash
mkdir ui-redline-workspace
cd ui-redline-workspace

# 1. Clone the GitHub repository (REQUIRED for running code)
git clone https://github.com/future-architect/UI-Redline-bench.git

# 2. (Optional) Clone this Hugging Face dataset.
# Only needed if you want to browse the instruction images locally;
# the Python script downloads the dataset automatically via the Hub API.

# Initialize Git LFS (required to download the large image/parquet files)
git lfs install

# Clone into a differently named directory to avoid a name conflict with the GitHub repo
git clone https://huggingface.co/datasets/future-architect/UI-Redline-bench UI-Redline-bench-dataset
```

The resulting directory structure should look like this:

```text
ui-redline-workspace/
├── UI-Redline-bench/           # GitHub Repo: Scripts, HTML, CSS (The execution environment)
│   ├── data/
│   ├── script/
│   └── ...
└── UI-Redline-bench-dataset/   # HF Repo: (Optional) For manual image inspection
    └── ...

```

## Dataset Structure

Each record represents a modification task. The file paths provided (`src_html_path`, etc.) are relative to the root of the **GitHub repository** (`UI-Redline-bench/`).

| Field | Type | Description |
| --- | --- | --- |
| `src_html_path` | string | Relative path to the **original** HTML code (e.g., `data/news/bootstrap/src/index.html`). |
| `src_css_path` | string | Relative path to the **original** CSS code. |
| `web_type` | string | Type of the website (`news`, `onlinestore`, `portfolio`). |
| `css_framework` | string | CSS framework used (`vanilla`, `bootstrap`, `tailwind`). |
| `image_instruct` | image | The visual instruction (redline) image input for the VLM. |
| `modification_category` | string | Category of the modification (e.g., `layout`, `color_contrast`). |
| `style` | string | Style of the visual instruction (`digital` or `handwritten`). |
| `image_has_arrow` | bool | Whether the instruction image contains arrows. |
| `image_has_enclosure` | bool | Whether the instruction image contains enclosures/bounding boxes. |
| `image_has_ui_sketch` | bool | Whether the instruction image contains sketches of new UI elements. |
| `ref_html_path` | string | Relative path to the **ground truth** HTML code. |
| `ref_css_path` | string | Relative path to the **ground truth** CSS code. |
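Because the path fields are relative to the GitHub checkout, each record's code files must be resolved against that repo root before reading them. A minimal sketch (the record values below are illustrative stand-ins for a dataset row, and `GITHUB_REPO_ROOT` assumes the directory layout from the Usage section):

```python
import os

# A hypothetical record mirroring the fields in the table above;
# in practice each record comes from `datasets.load_dataset(...)`,
# and the CSS path here is illustrative, not taken from the dataset.
record = {
    "src_html_path": "data/news/bootstrap/src/index.html",
    "src_css_path": "data/news/bootstrap/src/style.css",
    "web_type": "news",
    "css_framework": "bootstrap",
    "style": "handwritten",
}

# Paths are relative to the GitHub repo checkout, not to this HF dataset,
# so join them with the repo root before opening the files.
GITHUB_REPO_ROOT = os.path.abspath("./UI-Redline-bench")
src_html = os.path.join(GITHUB_REPO_ROOT, record["src_html_path"])
src_css = os.path.join(GITHUB_REPO_ROOT, record["src_css_path"])

print(src_html)
```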

## Usage Example (Running Inference)

This example demonstrates how to load the dataset and run inference by importing the scripts directly from the cloned GitHub repository.
Save the following code as `run_benchmark.py` in your `ui-redline-workspace` directory.

```python
import os
import sys
from datasets import load_dataset

# 1. Setup Paths
# Assuming you are in the 'ui-redline-workspace' directory.
GITHUB_REPO_ROOT = os.path.abspath("./UI-Redline-bench") 
HF_DATASET_PATH = os.path.abspath("./UI-Redline-bench-dataset")
OUTPUT_DIR = os.path.abspath("./output_results")

# 2. Add GitHub script directory to sys.path to allow imports
sys.path.append(os.path.join(GITHUB_REPO_ROOT, "script"))

try:
    # -------------------------------------------------------------------------
    # IMPORT THE TARGET MODEL SCRIPT HERE
    # Change this line depending on the model you want to evaluate:
    #   from prediction_based_on_image_gpt5 import process_sample
    #   from prediction_based_on_image_claude import process_sample
    #   from prediction_based_on_image_gemini import process_sample
    #   from prediction_based_on_image_qwen import process_sample
    # -------------------------------------------------------------------------
    from prediction_based_on_image_gemini import process_sample
except ImportError as e:
    print("Error importing scripts. Make sure you are running this script with the correct environment (e.g., via 'uv run').")
    raise e

# 3. Load Dataset
if os.path.exists(HF_DATASET_PATH):
    print(f"Loading dataset locally from: {HF_DATASET_PATH}")
    ds = load_dataset(HF_DATASET_PATH, split="test")
else:
    print("Local dataset not found. Downloading from Hugging Face Hub...")
    ds = load_dataset("future-architect/UI-Redline-bench", split="test")

# 4. Iterate and Run Inference
for example in ds:
    # The dataset returns a PIL.Image object, which can be passed directly to the scripts.
    img_input = example['image_instruct']

    # Construct absolute paths for HTML/CSS
    html_path = os.path.join(GITHUB_REPO_ROOT, example['src_html_path'])
    css_path = os.path.join(GITHUB_REPO_ROOT, example['src_css_path'])
    
    # Construct output directory for this case
    case_output_dir = os.path.join(OUTPUT_DIR, os.path.dirname(example['ref_html_path']))
    
    print(f"Processing: {html_path}")
    
    # Call the imported function directly
    process_sample(
        html_path=html_path,
        css_path=css_path,
        image_path=img_input,
        output_dir=case_output_dir
    )

print("Inference completed.")

```

### How to execute the script

Since we use `uv` for dependency management, you must run the script using the correct environment defined in the GitHub repository.

**For GPT, Claude, and Gemini (API-based models):**
Use the `cpu-env`.

```bash
uv run --project UI-Redline-bench/cpu-env python run_benchmark.py

```

**For Qwen (Local vLLM model):**
Use the `gpu-env`. Make sure you have started the vLLM server beforehand.

```bash
# 1. Start the server (in a separate terminal)
uv run --project UI-Redline-bench/gpu-env bash UI-Redline-bench/script/launch_vllm_server.sh

# 2. Run the benchmark
uv run --project UI-Redline-bench/gpu-env python run_benchmark.py

```

## Citation

If you use this dataset, please cite our paper:

```bibtex
@inproceedings{hiai2026uiredline,
  title={UI-Redline-bench: 赤入れ指示によるWebUIコード修正ベンチマーク},
  author={肥合智史 and 藤井諒 and 岸波洋介 and 森下睦},
  booktitle={Proceedings of the 32nd Annual Meeting of the Association for Natural Language Processing (NLP2026)},
  year={2026}
}

```