---
license: mit
task_categories:
  - text-to-image
  - image-segmentation
language:
  - en
  - zh
tags:
  - web-interaction
  - multimodal
  - benchmark
  - llm-evaluation
  - vue.js
  - frontend
pretty_name: MultiInteract-Bench
size_categories:
  - n<1K
---

# MultiInteract-Bench Dataset

*A Benchmark Dataset for Evaluating Web Interaction Reconstruction from Image Sequences*


## 📋 Overview

MultiInteract-Bench is a comprehensive dataset designed to evaluate the capabilities of multimodal large language models in reproducing web-based interactions from image sequences. The dataset contains real-world web interface snapshots showing progressive states of web applications through user interactions.

### Key Features

- **Multi-turn Interactions**: Each task includes a sequence of web page states showing the progression of user interactions
- **Real-world Applications**: Covers popular web applications such as Spotify, Stripe, and more
- **Comprehensive Metadata**: Each task includes detailed metadata describing every interaction step
- **High-quality Images**: PNG-format screenshots with clear visual elements
- **Diverse Scenarios**: Music players, payment forms, and other common web UI patterns

## 📊 Dataset Structure

### Task Format

Each task in the dataset follows this structure:

```
task_name_timestamp/
├── metadata.json         # Task metadata and interaction descriptions
├── step_00.png           # Initial state (before any interaction)
├── step_01.png           # State after step 1 interaction
├── step_02.png           # State after step 2 interaction
└── ...                   # Additional interaction steps
```

### Metadata Structure

Each metadata.json file contains:

```json
{
  "id": "task_name_timestamp",
  "description": "Brief description of the web application",
  "steps": [
    {
      "step_index": 0,
      "description": "Initial state description",
      "image": "step_00.png"
    },
    {
      "step_index": 1,
      "description": "First interaction description",
      "image": "step_01.png"
    }
  ]
}
```
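A minimal loader sketch for this layout, using only the standard library (`load_task` is an illustrative helper, not part of the dataset), which reads a task's `metadata.json` and checks that each step references the expected `step_XX.png` file:

```python
import json
from pathlib import Path

def load_task(task_dir):
    """Load a task's metadata.json and sanity-check the step/image naming."""
    task_dir = Path(task_dir)
    metadata = json.loads((task_dir / "metadata.json").read_text(encoding="utf-8"))
    for step in metadata["steps"]:
        # Each step should reference an image named step_XX.png
        expected = f"step_{step['step_index']:02d}.png"
        if step["image"] != expected:
            raise ValueError(f"unexpected image name: {step['image']}")
    return metadata
```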

## 📦 Dataset Contents

This dataset includes:

- **Total Tasks**: Multiple real-world web interaction scenarios
- **Steps per Task**: Typically 5-7 interaction steps
- **Image Format**: PNG
- **Image Resolution**: High-resolution screenshots
- **Applications**: Various popular web platforms

## 🎯 Use Cases

MultiInteract-Bench is designed for:

1. **Model Evaluation**: Benchmarking multimodal LLMs on web interaction reconstruction
2. **Web Development**: Testing automated web page generation systems
3. **UI/UX Research**: Studying web interface patterns and interactions
4. **Computer Vision**: Evaluating image-to-code generation capabilities
5. **Agent Systems**: Training and testing web automation agents

## 🚀 Quick Start

### Download the Dataset

```bash
# Using huggingface-cli
huggingface-cli download zionzionzion/MultiInteract-Bench --repo-type dataset

# Or download the zip archive directly
wget https://huggingface.co/datasets/zionzionzion/MultiInteract-Bench/resolve/main/dataset_multi_turn.zip
unzip dataset_multi_turn.zip
```

### Load in Python

```python
import json
from pathlib import Path

# Load a specific task
task_path = Path("dataset_multi_turn/Spotify_1766618072")
with open(task_path / "metadata.json", "r", encoding="utf-8") as f:
    metadata = json.load(f)

print(f"Task: {metadata['id']}")
print(f"Description: {metadata['description']}")
print(f"Number of steps: {len(metadata['steps'])}")

# Access images
for step in metadata["steps"]:
    image_path = task_path / step["image"]
    print(f"Step {step['step_index']}: {step['description']}")
    print(f"  Image: {image_path}")
```
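To process the whole dataset rather than a single task, a small iterator over all task directories can be written with the standard library alone (`iter_tasks` is a hypothetical helper that assumes the directory layout described above):

```python
import json
from pathlib import Path

def iter_tasks(dataset_root):
    """Yield (task_id, metadata) for every task directory under dataset_root."""
    for meta_file in sorted(Path(dataset_root).glob("*/metadata.json")):
        metadata = json.loads(meta_file.read_text(encoding="utf-8"))
        yield metadata["id"], metadata
```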

## 🔧 Related Repository

The complete evaluation framework provides:

- Model reproduction scripts
- Visual metrics calculation
- Automated screenshot capture
- Statistical analysis tools

Please see our GitHub repository for details.

### Evaluation Metrics

The associated repository implements 8 evaluation metrics:

1. **CLIP Similarity**: Semantic alignment (0-1, higher is better)
2. **LPIPS Distance**: Perceptual similarity (0-∞, lower is better)
3. **Style Loss**: Artistic style consistency (0-∞, lower is better)
4. **Text Similarity**: Text content preservation (0-1, higher is better)
5. **Color Histogram Similarity**: Color distribution match (0-1, higher is better)
6. **Dominant Color Similarity**: Primary color consistency (0-1, higher is better)
7. **DINO Similarity**: Structural layout similarity (0-1, higher is better)
8. **SSIM**: Structural fidelity (0-1, higher is better)
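The exact implementations live in the repository. As an illustration only, a histogram-intersection version of the color histogram similarity (metric 5) could be sketched as follows, assuming NumPy and RGB image arrays with values in [0, 255] (this is not the authors' implementation):

```python
import numpy as np

def color_histogram_similarity(img_a, img_b, bins=16):
    """Histogram-intersection similarity between two RGB images (0-255 arrays)."""
    hists = []
    for img in (img_a, img_b):
        # 3-D color histogram over (R, G, B), normalised to a probability distribution
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        hists.append(h / h.sum())
    # Intersection of the two distributions: 1.0 means identical color distributions
    return float(np.minimum(hists[0], hists[1]).sum())
```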

## 📊 Dataset Statistics

| Metric | Value |
|---|---|
| Total Tasks | Multiple scenarios |
| Images per Task | 5-7 |
| Image Format | PNG |
| Metadata Format | JSON |
| Languages | English, Chinese |
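Since the published totals are approximate, exact step-count statistics can be computed from a local copy of the dataset; `dataset_stats` below is a hypothetical stdlib-only helper:

```python
import json
from collections import Counter
from pathlib import Path

def dataset_stats(dataset_root):
    """Map number-of-steps -> number of tasks with that many steps."""
    counts = Counter()
    for meta_file in Path(dataset_root).glob("*/metadata.json"):
        metadata = json.loads(meta_file.read_text(encoding="utf-8"))
        counts[len(metadata["steps"])] += 1
    return counts
```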

## 📝 Citation

If you use MultiInteract-Bench in your research, please cite:

```bibtex
@dataset{multinteract_bench_2026,
  title = {MultiInteract-Bench: A Benchmark Dataset for Evaluating Web Interaction Reconstruction from Image Sequences},
  author = {Yang, Tiankun},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/datasets/zionzionzion/MultiInteract-Bench}
}
```

## 📧 Contact

For questions, issues, or suggestions regarding this dataset, please contact:

Email: yangtiankun25@mails.ucas.cn

## 📄 License

This dataset is provided under the MIT License. See the LICENSE file for details.

**Note**: This dataset is intended for research and educational purposes. Please respect the terms of service of the web applications from which screenshots were captured.