---
license: mit
task_categories:
- text-to-image
- image-segmentation
language:
- en
- zh
tags:
- web-interaction
- multimodal
- benchmark
- llm-evaluation
- vue.js
- frontend
pretty_name: MultiInteract-Bench
size_categories:
- n<1K
---

# MultiInteract-Bench Dataset
**A Benchmark Dataset for Evaluating Web Interaction Reconstruction from Image Sequences**

[![HuggingFace Dataset](https://img.shields.io/badge/Dataset-HuggingFace-yellow.svg)](https://huggingface.co/datasets/zionzionzion/MultiInteract-Bench)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
## 📋 Overview

MultiInteract-Bench is a comprehensive dataset designed to evaluate the ability of multimodal large language models to reproduce web-based interactions from image sequences. The dataset contains real-world web interface snapshots showing the progressive states of web applications as a user interacts with them.

### Key Features

- **Multi-turn Interactions**: Each task includes a sequence of web page states showing the progression of user interactions
- **Real-world Applications**: Covers popular web applications such as Spotify and Stripe
- **Comprehensive Metadata**: Each task includes detailed metadata describing the interaction steps
- **High-quality Images**: PNG screenshots with clear visual elements
- **Diverse Scenarios**: Includes music players, payment forms, and various other web UI patterns

## 📊 Dataset Structure

### Task Format

Each task in the dataset follows this structure:

```
task_name_timestamp/
├── metadata.json     # Task metadata and interaction descriptions
├── step_00.png       # Initial state (before any interaction)
├── step_01.png       # State after step 1 interaction
├── step_02.png       # State after step 2 interaction
└── ...               # Additional interaction steps
```

### Metadata Structure

Each `metadata.json` file contains:

```json
{
  "id": "task_name_timestamp",
  "description": "Brief description of the web application",
  "steps": [
    {
      "step_index": 0,
      "description": "Initial state description",
      "image": "step_00.png"
    },
    {
      "step_index": 1,
      "description": "First interaction description",
      "image": "step_01.png"
    }
  ]
}
```

## 📦 Dataset Contents

This dataset includes:

- **Total Tasks**: Multiple real-world web interaction scenarios
- **Steps per Task**: Typically 5-7 interaction steps
- **Image Format**: PNG
- **Image Resolution**: High-resolution screenshots
- **Applications**: Various popular web platforms

## 🎯 Use Cases

MultiInteract-Bench is designed for:

1. **Model Evaluation**: Benchmarking multimodal LLMs on web interaction reconstruction
2. **Web Development**: Testing automated web page generation systems
3. **UI/UX Research**: Studying web interface patterns and interactions
4. **Computer Vision**: Evaluating image-to-code generation capabilities
5. **Agent Systems**: Training and testing web automation agents

## 🚀 Quick Start

### Download the Dataset

```bash
# Using huggingface-cli
huggingface-cli download zionzionzion/MultiInteract-Bench --repo-type dataset

# Or download the zip file directly
wget https://huggingface.co/datasets/zionzionzion/MultiInteract-Bench/resolve/main/dataset_multi_turn.zip
unzip dataset_multi_turn.zip
```

### Load in Python

```python
import json
from pathlib import Path

# Load a specific task
task_path = Path("dataset_multi_turn/Spotify_1766618072")
with open(task_path / "metadata.json", "r") as f:
    metadata = json.load(f)

print(f"Task: {metadata['id']}")
print(f"Description: {metadata['description']}")
print(f"Number of steps: {len(metadata['steps'])}")

# Access images
for step in metadata['steps']:
    image_path = task_path / step['image']
    print(f"Step {step['step_index']}: {step['description']}")
    print(f"  Image: {image_path}")
```

## 🔧 Related Repository

For the complete evaluation framework, including:

- Model reproduction scripts
- Visual metrics calculation
- Automated screenshot capture
- Statistical analysis tools

please visit our [GitHub repository](https://github.com/zion-zion-zion/MultiInteract-Bench).

### Evaluation Metrics

The associated repository implements 8 evaluation metrics:

1. **CLIP Similarity** - Semantic alignment (0-1, higher is better)
2. **LPIPS Distance** - Perceptual similarity (0-∞, lower is better)
3. **Style Loss** - Artistic style consistency (0-∞, lower is better)
4. **Text Similarity** - Text content preservation (0-1, higher is better)
5. **Color Histogram Similarity** - Color distribution (0-1, higher is better)
6. **Dominant Color Similarity** - Primary color consistency (0-1, higher is better)
7. **DINO Similarity** - Structural layout (0-1, higher is better)
8. **SSIM** - Structural fidelity (0-1, higher is better)

## 📊 Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Tasks | Multiple scenarios |
| Steps per Task | 5-7 |
| Image Format | PNG |
| Metadata Format | JSON |
| Languages | English, Chinese |

## 📝 Citation

If you use MultiInteract-Bench in your research, please cite:

```bibtex
@dataset{multinteract_bench_2026,
  title     = {MultiInteract-Bench: A Benchmark Dataset for Evaluating Web Interaction Reconstruction from Image Sequences},
  author    = {Yang, Tiankun},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/zionzionzion/MultiInteract-Bench}
}
```

## 📧 Contact

For questions, issues, or suggestions regarding this dataset, please contact:

**Email**: yangtiankun25@mails.ucas.cn

## 📄 License

This dataset is provided under the MIT License. See the LICENSE file for details.

## 🔗 Links

- [GitHub Repository](https://github.com/zion-zion-zion/MultiInteract-Bench)
- [Dataset Download](https://huggingface.co/datasets/zionzionzion/MultiInteract-Bench)
- [HuggingFace Space](https://huggingface.co/spaces/zionzionzion/) (if applicable)

---

**Note**: This dataset is intended for research and educational purposes. Please respect the terms of service of the web applications from which the screenshots were captured.
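Because every task folder follows the layout above (a `metadata.json` whose `steps` entries reference `step_XX.png` files with consecutive `step_index` values), a local download can be sanity-checked before running any evaluation. Below is a minimal sketch; the `validate_task` and `validate_dataset` helpers are our own illustration, not part of any official tooling:

```python
import json
from pathlib import Path

def validate_task(task_dir):
    """Check one task folder against the documented layout: step_index
    values must run 0..N-1 in order, and every referenced image must exist.
    Returns a list of human-readable problems (empty if the task is OK)."""
    task_dir = Path(task_dir)
    problems = []
    with open(task_dir / "metadata.json", "r", encoding="utf-8") as f:
        metadata = json.load(f)
    for i, step in enumerate(metadata["steps"]):
        if step["step_index"] != i:
            problems.append(f"step {i}: unexpected step_index {step['step_index']}")
        if not (task_dir / step["image"]).is_file():
            problems.append(f"step {i}: missing image {step['image']}")
    return problems

def validate_dataset(root):
    """Validate every task folder under the dataset root; returns a
    {task_name: [problems]} dict covering only the tasks with issues."""
    report = {}
    for task_dir in sorted(Path(root).iterdir()):
        if task_dir.is_dir() and (task_dir / "metadata.json").is_file():
            problems = validate_task(task_dir)
            if problems:
                report[task_dir.name] = problems
    return report
```

On an intact copy of the unzipped `dataset_multi_turn/` directory, `validate_dataset("dataset_multi_turn")` should return an empty dict.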