---
dataset_info:
  features:
  - name: src_html_path
    dtype: string
  - name: src_css_path
    dtype: string
  - name: web_type
    dtype: string
  - name: css_framework
    dtype: string
  - name: image_instruct
    dtype: image
  - name: modification_category
    dtype: string
  - name: style
    dtype: string
  - name: image_has_arrow
    dtype: bool
  - name: image_has_enclosure
    dtype: bool
  - name: image_has_ui_sketch
    dtype: bool
  - name: ref_html_path
    dtype: string
  - name: ref_css_path
    dtype: string
  splits:
  - name: test
    num_bytes: 888818482
    num_examples: 350
  download_size: 887208210
  dataset_size: 888818482
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# UI-Redline-bench

This dataset contains the benchmark data for the paper **"UI-Redline-bench: 赤入れ指示によるWebUIコード修正ベンチマーク"** (UI-Redline-bench: A Benchmark for Web UI Code Modification via Redline Instructions).

The benchmark evaluates the capability of Vision-Language Models (VLMs) to modify Web UI code (HTML/CSS) based on visual "redline" instructions (handwritten or digital) drawn on screenshots.

**📄 [Paper](https://www.anlp.jp/proceedings/annual_meeting/2026/pdf_dir/B8-1.pdf)** | **💻 [GitHub Repository (Evaluation Code & Runnable Environment)](https://github.com/future-architect/UI-Redline-bench)**
|
## Dataset Description

* **Repository:** [future-architect/UI-Redline-bench](https://github.com/future-architect/UI-Redline-bench)
* **Total Instances:** 350
* **Web Types:** News, Online Store, Portfolio
* **CSS Frameworks:** Vanilla, Bootstrap, Tailwind CSS
* **Modification Categories:** Layout, Color Contrast, Text Readability, Button Usability, Learnability
|
### Usage

This Hugging Face dataset contains only the instruction images and the corresponding metadata.
To run the experiments, follow the steps below to clone the GitHub repository for the evaluation scripts and place the dataset accordingly.
Cloning this Hugging Face repository is optional; do it only if you want to inspect the images manually.

```bash
mkdir ui-redline-workspace
cd ui-redline-workspace

# 1. Clone the GitHub repository (REQUIRED for running the code)
git clone https://github.com/future-architect/UI-Redline-bench.git

# 2. (Optional) Clone this Hugging Face dataset.
# Only needed if you want to browse the instruction images locally;
# the Python scripts download the dataset automatically via the API.

# Initialize Git LFS (required to download the large image/parquet files)
git lfs install

# Clone into a differently named directory to avoid a name conflict
git clone https://huggingface.co/datasets/future-architect/UI-Redline-bench UI-Redline-bench-dataset
```
|
The resulting directory structure should look like this:

```text
ui-redline-workspace/
├── UI-Redline-bench/           # GitHub repo: scripts, HTML, CSS (the execution environment)
│   ├── data/
│   ├── script/
│   └── ...
└── UI-Redline-bench-dataset/   # HF repo (optional): for manual image inspection
```
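After cloning, you can sanity-check the workspace layout with a few lines of Python. This is a minimal sketch (the `missing_dirs` helper is our own, not part of the repository); the directory names match the tree above:

```python
from pathlib import Path


def missing_dirs(workspace: str) -> list[str]:
    """Return the expected subdirectories that are not present under workspace."""
    required = [
        Path(workspace, "UI-Redline-bench", "data"),
        Path(workspace, "UI-Redline-bench", "script"),
    ]
    return [str(p) for p in required if not p.is_dir()]


if __name__ == "__main__":
    missing = missing_dirs("ui-redline-workspace")
    if missing:
        print("Missing directories:", ", ".join(missing))
    else:
        print("Workspace layout looks good.")
```

Running this from the directory that contains `ui-redline-workspace` flags anything that did not clone correctly before you start an evaluation run.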
|
## Dataset Structure

Each record represents a modification task. The file paths provided (`src_html_path`, etc.) are relative to the root of the **GitHub repository** (`UI-Redline-bench/`).

| Field | Type | Description |
| --- | --- | --- |
| `src_html_path` | string | Relative path to the **original** HTML code (e.g., `data/news/bootstrap/src/index.html`). |
| `src_css_path` | string | Relative path to the **original** CSS code. |
| `web_type` | string | Type of the website (`news`, `onlinestore`, `portfolio`). |
| `css_framework` | string | CSS framework used (`vanilla`, `bootstrap`, `tailwind`). |
| `image_instruct` | image | The visual instruction (redline) image input for the VLM. |
| `modification_category` | string | Category of the modification (e.g., `layout`, `color_contrast`). |
| `style` | string | Style of the visual instruction (`digital` or `handwritten`). |
| `image_has_arrow` | bool | Whether the instruction image contains arrows. |
| `image_has_enclosure` | bool | Whether the instruction image contains enclosures/bounding boxes. |
| `image_has_ui_sketch` | bool | Whether the instruction image contains sketches of new UI elements. |
| `ref_html_path` | string | Relative path to the **ground truth** HTML code. |
| `ref_css_path` | string | Relative path to the **ground truth** CSS code. |
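To make the schema concrete, here is how a record's relative paths resolve against the cloned GitHub repository. The `src_html_path` value is the example from the table; the other field values are illustrative stand-ins:

```python
from pathlib import PurePosixPath

# Illustrative record following the schema; src_html_path is the example
# from the table, the other values are hypothetical stand-ins.
record = {
    "src_html_path": "data/news/bootstrap/src/index.html",
    "web_type": "news",
    "css_framework": "bootstrap",
    "modification_category": "layout",
    "style": "digital",
    "image_has_arrow": True,
    "image_has_enclosure": False,
    "image_has_ui_sketch": False,
}

# Relative paths resolve against the root of the GitHub repository.
repo_root = PurePosixPath("UI-Redline-bench")
src_html = repo_root / record["src_html_path"]
print(src_html)  # UI-Redline-bench/data/news/bootstrap/src/index.html
```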
|
## Usage Example (Running Inference)

This example demonstrates how to load the dataset and run inference by importing the scripts directly from the cloned GitHub repository.
Save the following code as `run_benchmark.py` in your `ui-redline-workspace` directory.
|
```python
import os
import sys

from datasets import load_dataset

# 1. Set up paths.
# Assumes you are in the 'ui-redline-workspace' directory.
GITHUB_REPO_ROOT = os.path.abspath("./UI-Redline-bench")
HF_DATASET_PATH = os.path.abspath("./UI-Redline-bench-dataset")
OUTPUT_DIR = os.path.abspath("./output_results")

# 2. Add the GitHub script directory to sys.path to allow imports.
sys.path.append(os.path.join(GITHUB_REPO_ROOT, "script"))

try:
    # -------------------------------------------------------------------------
    # IMPORT THE TARGET MODEL SCRIPT HERE
    # Change this line depending on the model you want to evaluate:
    #   from prediction_based_on_image_gpt5 import process_sample
    #   from prediction_based_on_image_claude import process_sample
    #   from prediction_based_on_image_gemini import process_sample
    #   from prediction_based_on_image_qwen import process_sample
    # -------------------------------------------------------------------------
    from prediction_based_on_image_gemini import process_sample
except ImportError:
    print("Error importing scripts. Make sure you are running this script "
          "in the correct environment (e.g., via 'uv run').")
    raise

# 3. Load the dataset.
if os.path.exists(HF_DATASET_PATH):
    print(f"Loading dataset locally from: {HF_DATASET_PATH}")
    ds = load_dataset(HF_DATASET_PATH, split="test")
else:
    print("Local dataset not found. Downloading from the Hugging Face Hub...")
    ds = load_dataset("future-architect/UI-Redline-bench", split="test")

# 4. Iterate and run inference.
for example in ds:
    # The dataset returns a PIL.Image object, which can be passed directly
    # to the prediction scripts.
    img_input = example["image_instruct"]

    # Construct absolute paths for the source HTML/CSS.
    html_path = os.path.join(GITHUB_REPO_ROOT, example["src_html_path"])
    css_path = os.path.join(GITHUB_REPO_ROOT, example["src_css_path"])

    # Construct the output directory for this case.
    case_output_dir = os.path.join(OUTPUT_DIR, os.path.dirname(example["ref_html_path"]))

    print(f"Processing: {html_path}")

    # Call the imported function directly.
    process_sample(
        html_path=html_path,
        css_path=css_path,
        image_path=img_input,
        output_dir=case_output_dir,
    )

print("Inference completed.")
```
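For a quick smoke test you may not want to run all 350 instances. With the `datasets` API you could pass a predicate to `ds.filter(...)`; the predicate itself is plain Python, sketched here on stand-in dicts (the chosen subset, digital-style layout tasks, is just an example):

```python
def predicate(ex: dict) -> bool:
    """Select only digital-style layout tasks (an arbitrary example subset)."""
    return ex["modification_category"] == "layout" and ex["style"] == "digital"


# Stand-in records for illustration; with the real dataset you would
# use: subset = ds.filter(predicate)
samples = [
    {"modification_category": "layout", "style": "digital"},
    {"modification_category": "layout", "style": "handwritten"},
    {"modification_category": "color_contrast", "style": "digital"},
]
subset = [ex for ex in samples if predicate(ex)]
print(len(subset))  # 1
```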
|
### How to Execute the Script

Since we use `uv` for dependency management, you must run the script in the environment defined in the GitHub repository.

**For GPT, Claude, and Gemini (API-based models):**
Use the `cpu-env`.

```bash
uv run --project UI-Redline-bench/cpu-env python run_benchmark.py
```
|
**For Qwen (local vLLM model):**
Use the `gpu-env`. Make sure you have started the vLLM server beforehand.

```bash
# 1. Start the server (in a separate terminal)
uv run --project UI-Redline-bench/gpu-env bash UI-Redline-bench/script/launch_vllm_server.sh

# 2. Run the benchmark
uv run --project UI-Redline-bench/gpu-env python run_benchmark.py
```
|
## Citation

If you use this dataset, please cite our paper:

```bibtex
@inproceedings{hiai2026uiredline,
  title={UI-Redline-bench: 赤入れ指示によるWebUIコード修正ベンチマーク},
  author={肥合智史 and 藤井諒 and 岸波洋介 and 森下睦},
  booktitle={Proceedings of the 32nd Annual Meeting of the Association for Natural Language Processing (NLP2026)},
  year={2026}
}
```
|