---
pretty_name: WebAppEval
dataset_type: benchmark
language:
  - en
tags:
  - web-agents
  - autonomous-agents
  - benchmarking
  - web-evaluation
  - docker
  - human-computer-interaction
---

# WebAppEval Dataset

WebAppEval is a benchmark dataset for evaluating autonomous web agents on real-world web applications. It is designed to assess an agent's ability to navigate, reason, and act within realistic web environments.


## 🔍 HuggingFace Preview

This HuggingFace repository provides a lightweight JSONL preview of the dataset to illustrate the task format and enable quick inspection via the Dataset Viewer.

⚠️ **Important note:** The JSONL file hosted here is intended for demonstration and preview purposes only.
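Since the preview is plain JSONL, it can be inspected with nothing more than the standard library. The record below is a minimal sketch: the field names (`task_id`, `instruction`, `app`) are illustrative assumptions, not the dataset's actual schema — check the Dataset Viewer for the real fields.

```python
import json

# Hypothetical preview record; the real JSONL schema may use different
# field names -- this only demonstrates the one-JSON-object-per-line format.
sample_line = json.dumps({
    "task_id": "demo-001",
    "instruction": "Log in and open the settings page",
    "app": "example-webapp",
})

# Each line of a JSONL file is a standalone JSON object.
record = json.loads(sample_line)
print(record["task_id"])         # → demo-001
print(record["instruction"])     # → Log in and open the settings page
```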

## 📦 Full Dataset, Evaluation Logic, and Environments

The complete WebAppEval benchmark, including:

- full nested task definitions
- detailed evaluation rules (DOM / URL / string matching)
- Dockerized web application environments
- execution and evaluation scripts
- step-by-step setup and usage instructions

is hosted on GitHub:

👉 Full dataset, Docker environments, and documentation:
https://github.com/nguyennguyen6bk/WebAppEval
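The GitHub repository holds the authoritative evaluation logic. As a rough illustration of what URL- and string-matching rules typically check, here is a minimal sketch — the function names and matching conventions are assumptions for illustration, not the benchmark's actual API:

```python
from urllib.parse import urlparse

def url_match(final_url: str, expected_path: str) -> bool:
    """Compare only the path component, ignoring scheme, host, and query."""
    return urlparse(final_url).path.rstrip("/") == expected_path.rstrip("/")

def string_match(page_text: str, expected: str) -> bool:
    """Case-insensitive substring check against the rendered page text."""
    return expected.lower() in page_text.lower()

# An agent that lands on the settings page passes a URL-match rule:
print(url_match("https://app.local/settings/?tab=profile", "/settings"))  # → True
# A confirmation message on the page passes a string-match rule:
print(string_match("Welcome back, Alice!", "welcome back"))               # → True
```

DOM-matching rules would additionally inspect element structure and attributes; see the repository's evaluation scripts for the real rule formats.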


## 🐳 Execution Environment

All benchmark environments are provided as Docker containers to ensure reproducibility and ease of setup.
Instructions for building, running, and evaluating agents are available in the GitHub repository.
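A typical build-and-run workflow looks like the sketch below; the image name, tag, and port mapping are placeholders, so consult the GitHub README for the actual commands and environment names.

```shell
# Build an environment image from the repository checkout
# (the image name "webappeval-env" is illustrative only).
docker build -t webappeval-env .

# Run the web application container and expose it on a local port.
docker run -d -p 8080:80 --name webappeval-env webappeval-env
```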