# eval.yaml — registers ClawBench as a Benchmark dataset on the Hub.
# See: https://huggingface.co/docs/hub/eval-results#benchmark-datasets
# This file unlocks the native "🏆 Leaderboard" tab and "Official benchmark"
# badge once the dataset is added to HF's benchmark allow-list.
name: ClawBench
description: >
  ClawBench is an open benchmark for AI web agents: systems that drive a
  real browser to complete a user's task end to end. It scores agents on
  real, everyday online tasks (booking flights, ordering groceries,
  submitting job applications) across live websites. V1 ships 153 tasks
  across 144 websites (the original frontier-model leaderboard); V2 ships
  130 newer tasks (expanded coverage). For each run we capture five layers
  of behavioral data (session replay, HTTP traffic, browser actions, agent
  reasoning, and the final intercepted request) plus human ground truth,
  then score with an agentic evaluator that produces step-level, traceable
  diagnostics.
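# For orientation, a single run record could look roughly like the sketch
# below. The field names are illustrative assumptions, not the dataset's
# actual schema; only the five layers and the ground truth come from the
# description above.
#
#   task_id: v1/0042          # hypothetical identifier
#   session_replay: ...       # layer 1: recorded browser session
#   http_traffic: ...         # layer 2: captured requests and responses
#   browser_actions: ...      # layer 3: click/type/navigate event log
#   agent_reasoning: ...      # layer 4: the agent's step-by-step rationale
#   final_request: ...        # layer 5: the final intercepted request
#   ground_truth: ...         # human-annotated reference outcome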
# Pending: the Hub currently rejects this key ("Invalid input at
# evaluation_framework"); a PR against
# https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/eval.ts
# is needed to add `clawbench-eval` as a canonical framework before this
# file validates.
evaluation_framework: clawbench-eval
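# Once the framework is registered, model repos can report scores on this
# benchmark through the Hub's standard `model-index` card metadata (the
# format described by the eval-results docs linked above), which the
# Leaderboard tab can then surface. A sketch follows; the agent name, the
# task type, the metric id, the score, and the dataset repo id are
# hypothetical placeholders:
#
#   model-index:
#     - name: my-web-agent
#       results:
#         - task:
#             type: web-navigation
#           dataset:
#             type: NAIL-Group/ClawBench
#             name: ClawBench
#             config: default
#             split: test
#           metrics:
#             - type: success_rate
#               value: 0.42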
tasks:
  - id: v1
    config: default
    split: test
  - id: v2
    config: default
    split: test
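# Each entry above binds a benchmark version (its task id) to the dataset
# config and split that hold its runs. Future versions would extend the
# list the same way, e.g. (hypothetical):
#
#   - id: v3
#     config: default
#     split: test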