# eval.yaml — registers ClawBench as a Benchmark dataset on the Hub.
# See: https://huggingface.co/docs/hub/eval-results#benchmark-datasets
# This file unlocks the native "🏆 Leaderboard" tab and "Official benchmark"
# badge once the dataset is added to HF's benchmark allow-list.

name: ClawBench
description: >
  ClawBench is an open benchmark for AI web agents — the systems that drive a
  real browser to complete a user's task end-to-end. It scores agents on
  real, everyday online tasks (booking flights, ordering groceries, submitting
  job applications) across live websites. V1 ships 153 tasks across 144
  websites (the original frontier-model leaderboard); V2 ships 130 newer
  tasks (expanded coverage). For each run we capture five layers of behavioral
  data (session replay, HTTP traffic, browser actions, agent reasoning, and
  the final intercepted request) plus human ground truth, then score with
  an agentic evaluator that produces traceable, step-level diagnostics.

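# Illustrative shape of one captured run, to make the five data layers above
# concrete. Field names here are a hypothetical sketch, not the dataset's
# actual schema:
#
#   run:
#     session_replay: <replay of the live browser session>
#     http_traffic: <request/response log>
#     browser_actions: <clicks, keystrokes, scrolls, navigations>
#     agent_reasoning: <the agent's step-by-step trace>
#     final_intercepted_request: <the submission the agent attempted>
#     ground_truth: <human-verified expected outcome>
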
# Pending — needs PR to https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/eval.ts
# to add `clawbench-eval` as a canonical framework before this validates.
evaluation_framework: clawbench-eval

tasks:
  - id: v1
    config: default
    split: test
  - id: v2
    config: default
    split: test
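
# Usage sketch, assuming the data is consumed with the standard `datasets`
# loader; the repo id below is a placeholder, not the real one:
#
#   from datasets import load_dataset
#   test_set = load_dataset("<org>/ClawBench", "default", split="test")
#
# Note that both task entries above declare the same config and split, so the
# loader call alone cannot tell V1 from V2; if the two versions live in
# separate configs, point each task's `config:` at the matching one.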