---
pretty_name: WebAppEval
dataset_type: benchmark
language:
  - en
tags:
  - web-agents
  - autonomous-agents
  - benchmarking
  - web-evaluation
  - docker
  - human-computer-interaction
---




# WebAppEval Dataset

WebAppEval is a benchmark dataset for evaluating autonomous web agents on
real-world web applications. It assesses an agent's ability to navigate,
reason, and act within realistic web environments.

---

## 🔍 HuggingFace Preview

This HuggingFace repository provides a **lightweight JSONL preview** of the dataset
to illustrate the task format and enable quick inspection via the Dataset Viewer.

⚠️ **Important note**:
- The JSONL file hosted here is intended **for demonstration and preview purposes only**; it is not sufficient to run the full benchmark.
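Because the preview is plain JSONL, individual records can be inspected with the Python standard library alone. A minimal sketch follows; the field names (`task_id`, `instruction`, `website`, `eval_type`) are illustrative assumptions, not the dataset's actual schema:

```python
import json

# Hypothetical preview record -- the real field names in the hosted JSONL
# file may differ; this only illustrates the one-JSON-object-per-line shape.
sample_line = (
    '{"task_id": "demo-001", '
    '"instruction": "Add the cheapest laptop to the shopping cart.", '
    '"website": "http://localhost:7770", '
    '"eval_type": "url_match"}'
)

# Each line of a JSONL file parses as an independent JSON object.
record = json.loads(sample_line)
print(record["task_id"])    # -> demo-001
print(record["eval_type"])  # -> url_match
```

To scan a downloaded preview file, iterate over its lines and apply `json.loads` to each non-empty line in turn.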
---

## 📦 Full Dataset, Evaluation Logic, and Environments

The **complete WebAppEval benchmark** is hosted on GitHub and includes:
- full nested task definitions
- detailed evaluation rules (DOM / URL / string matching)
- Dockerized web application environments
- execution and evaluation scripts
- step-by-step setup and usage instructions

👉 **Full dataset, Docker environments, and documentation**:  
https://github.com/nguyennguyen6bk/WebAppEval

---

## 🐳 Execution Environment

All benchmark environments are provided as **Docker containers** to ensure
reproducibility and ease of setup. Instructions for building the images,
running the containers, and evaluating agents are available in the GitHub
repository.

---