
anchor_tasks_web — Dataset

A web-app generation benchmark. Each task is a multi-page UI taken from a public Figma community file. For every task we ship the textual page descriptions, the rendered mockup PNGs, the Figma node structure, the per-page click-annotations, and the distilled data-testid "anchors" that an evaluator uses to score a generated app.

The dataset is delivered in two layers:

  1. Ground-truth layer — one folder per task (1_newsletter/, 2_real-estate/, ...) holding all assets for that task.
  2. Input bundles — one folder per shipped condition (c0/, c1/, c2/, c3/) holding the strict subset of files that the generation agent receives, paired with a system-prompt file (agent_system_promt_c{N}.md). Condition c4 reuses the c3/ bundle and ships only its own prompt file.

metadata.jsonl describes both layers.


Tasks (10)

| # | Task ID | Title | Pages | Real-world analogues |
|---|---------|-------|-------|----------------------|
| 1 | 1_newsletter | Newsletter / Blog Publication | 9 | Substack, Beehiiv, Medium |
| 2 | 2_real-estate | Real Estate Listings | 14 | Zillow, Redfin, Realtor.com |
| 3 | 3_job-board | Job Board | 19 | Indeed, LinkedIn Jobs, Glassdoor, Wellfound |
| 4 | 4_forum | Forum / Q&A Site | 5 | Reddit, Stack Overflow, Hacker News |
| 5 | 5_travel-booking | Travel / Tour Booking Site | 8 | Booking.com, Airbnb experiences, Expedia |
| 6 | 6_chat | Team Chat App | 10 | Slack, Microsoft Teams, Discord |
| 7 | 7_cloud-storage | Cloud File Storage (with Admin Panel) | 33 | Google Drive, Dropbox, OneDrive |
| 8 | 8_ecommerce | E-commerce Store | 7 | Amazon, Shopify storefronts, Etsy |
| 9 | 9_project-management | Project Management Tool | 10 | Asana, Trello, Jira, Linear |
| 10 | 10_streaming_music-streaming | Music Streaming Platform | 13 | Spotify, Apple Music, YouTube Music |

(Page counts reflect the PNG files actually shipped under <task>/pages/.)


Layout

dataset/
├── README.md                          ← this file
├── metadata.jsonl                     ← machine-readable index (see below)
│
├── agent_system_promt_c0.md           ← system prompt: text-only ground truth
├── agent_system_promt_c1.md           ← system prompt: scaffold + PNG mockup
├── agent_system_promt_c2.md           ← system prompt: PNG mockup, no scaffold
├── agent_system_promt_c3.md           ← system prompt: PNG + Figma JSON, scaffold
├── agent_system_promt_c4.md           ← system prompt: PNG + Figma JSON, no scaffold
│
├── 1_newsletter/                      ← ground truth (canonical) for one task
│   ├── description.md                 ← page-by-page brief, with inline <testid> markers
│   ├── manifest.json                  ← page index ⇄ Figma node ⇄ PNG / structure JSON
│   ├── sitemap.md                     ← inferred navigation graph
│   ├── pages/
│   │   ├── 01_Home.png                ← rendered Figma frame
│   │   ├── 01_Home.json               ← full Figma node tree
│   │   └── 01_Home_structure-only.json ← lightweight structure (hierarchy + bbox only)
│   ├── interaction/
│   │   ├── 01_Home_human_interaction_annotation.json  ← per-click annotations
│   │   ├── 01_Home_click_subtype.json                 ← aggregated click subtypes
│   │   └── 01_Home_click_subtype_report.html          ← human-readable QA report
│   ├── 1_newsletter_anchors.raw.json
│   ├── 1_newsletter_anchors.json
│   └── 1_newsletter_anchors.cleaned.json              ← scored testid bboxes (eval ground truth)
│
├── 2_real-estate/                     ← also includes interaction_index{,.verified}.json
├── 3_job-board/
├── ... (one folder per task)
│
├── c0/<task>/                         ← input bundle for condition c0
│   └── description.md
│
├── c1/pick_{A,B,C}/<task>/            ← input bundle for condition c1 (3 framework picks)
│   ├── description.md
│   ├── pages/*.png
│   ├── scaffold/                      ← read-only reference scaffold
│   └── workspace/                     ← writable starting point (scaffold copy)
│
├── c2/<task>/
│   ├── description.md
│   └── pages/*.png
│
└── c3/<task>/
    ├── description.md
    └── pages/
        ├── *.png
        └── *_structure-only.json

Ground-truth assets (per task)

| File / dir | What it is |
|------------|------------|
| description.md | The agent-facing brief. One section per page; every UI element that the eval scores carries an inline <testid> marker (kebab-case, between angle brackets). |
| manifest.json | [{ index, page_name, file_key, node_id, png, structure_json, figma_meta }, ...]. Maps page number → Figma frame. |
| sitemap.md | Auto-generated navigation graph derived from interaction/*.json (only annotations whose type is navigate). |
| pages/<NN>_<Name>.png | Rendered Figma frame for the page (the visual ground truth). |
| pages/<NN>_<Name>.json | Full Figma node tree for the page. |
| pages/<NN>_<Name>_structure-only.json | Pruned node tree (hierarchy + bbox + type; no styles or text content). Cheap input for c3. |
| interaction/<page>_human_interaction_annotation.json | Per-page list of human-tagged clickable elements: { id, type, interactable, navigateTo, node, bbox_png, subtype, reasoning }. |
| interaction/<page>_click_subtype.json | Aggregate counts of click subtypes (click_dead, click_popout, click_navigate, ...). |
| interaction/<page>_click_subtype_report.html | QA visualization of the above. |
| <task>_anchors.raw.json | All annotations promoted to candidate anchors. |
| <task>_anchors.json | Filtered anchors. |
| <task>_anchors.cleaned.json | Final anchors used by the evaluator: { ann_id, testid, reasoning, bbox_png } per page. |
| interaction_index.json (some tasks) | Top-level index of interaction files. |
| interaction_index.verified.json (some tasks) | Same, verified against the manifest. |
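The manifest fields above can be joined to the on-disk layout with a little path arithmetic. A minimal sketch, assuming the naming patterns shown in the Layout section; the `page_assets` helper and the literal entry are illustrative, not part of the dataset:

```python
from pathlib import Path

def page_assets(task_root: str, entry: dict) -> dict:
    """Resolve one manifest.json entry to that page's on-disk assets.

    Field names follow the manifest schema above; the interaction-file
    naming pattern is inferred from the Layout tree.
    """
    root = Path(task_root)
    stem = Path(entry["png"]).stem  # e.g. "01_Home"
    return {
        "png": root / "pages" / entry["png"],
        "structure": root / "pages" / entry["structure_json"],
        "full_tree": root / "pages" / f"{stem}.json",
        "annotations": root / "interaction" / f"{stem}_human_interaction_annotation.json",
    }

# Illustrative entry for the first page of task 1:
entry = {"index": 1, "page_name": "Home", "png": "01_Home.png",
         "structure_json": "01_Home_structure-only.json"}
assets = page_assets("1_newsletter", entry)
print(assets["annotations"].as_posix())
```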

Testid contract

The evaluator locates each scored anchor with document.querySelector('[data-testid="<value>"]'). The marker syntax in description.md is <value>, not <testid>value</testid> — the angle brackets wrap the literal value. Examples: <google> → data-testid="google"; <last-name> → data-testid="last-name". The cleaned anchors JSON pairs each testid with the bbox, in PNG coordinates, that the evaluator matches against.
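Extracting the markers is a one-line regex; the character class below mirrors the grep pattern used in the Quick recipes section. The sample snippet is illustrative:

```python
import re

# Markers are the literal testid value between angle brackets, kebab-case.
MARKER = re.compile(r"<([a-z][a-z0-9_-]*)>")

snippet = ("Sign-in page: an email field <email-input>, a <last-name> field, "
           "and a <google> OAuth button.")
testids = MARKER.findall(snippet)
print(testids)  # → ['email-input', 'last-name', 'google']

# Selectors the evaluator would query for:
selectors = [f'[data-testid="{t}"]' for t in testids]
```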


Conditions (input bundles)

| ID | What the agent sees | Bootstrap | System prompt |
|----|---------------------|-----------|---------------|
| c0 | description.md only (no PNG, no Figma JSON) | hand-write everything | agent_system_promt_c0.md |
| c1 | description.md + pages/*.png + scaffold/ + workspace/ | start from a pre-installed scaffold; 3 framework picks under c1/pick_{A,B,C}/: A = Astro, B = Eleventy, C = Next.js 16 + React 19 + Tailwind v4 | agent_system_promt_c1.md |
| c2 | description.md + pages/*.png | hand-write everything | agent_system_promt_c2.md |
| c3 | description.md + pages/*.png + pages/*_structure-only.json | run an idiomatic scaffold yourself | agent_system_promt_c3.md |
| c4 | same files as c3 (no separate c4/ bundle is shipped — reuse c3/<task>/) | hand-write from scratch (no scaffold artifact allowed) | agent_system_promt_c4.md |

Why the conditions exist

The conditions sweep two axes, visual-input fidelity and starter code (bootstrap), plus a framework-choice probe:

  • Visual input axis: text-only (c0) → text + PNG (c1, c2) → text + PNG + Figma structure JSON (c3, c4).
  • Bootstrap axis: pre-installed scaffold provided (c1) vs. agent picks/runs scaffold (c3) vs. no scaffold allowed / hand-write (c0, c2, c4).
  • c1's three picks isolate framework-choice variance under otherwise-identical inputs.

Sample count: c0(10) + c1(10×3) + c2(10) + c3(10) = 60 input bundles. (c4 reuses c3 inputs and only swaps the system prompt, so it adds 10 more agent runs but no new files on disk.)


metadata.jsonl

One JSON object per line, discriminated by a kind field. Top-to-bottom order: dataset header (1) → task records (10) → sample records (60).
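Reading the file is a one-pass partition on kind. A sketch over an inline stand-in (the real file has 1 + 10 + 60 lines):

```python
import json
from collections import Counter

# Minimal stand-in for metadata.jsonl; fields trimmed to what the sketch needs.
lines = [
    '{"kind": "dataset", "name": "anchor_tasks_web"}',
    '{"kind": "task", "task_id": "1_newsletter"}',
    '{"kind": "sample", "id": "c0/1_newsletter", "condition": "c0"}',
    '{"kind": "sample", "id": "c1/pick_A/1_newsletter", "condition": "c1"}',
]
records = [json.loads(line) for line in lines]

by_kind = Counter(r["kind"] for r in records)
per_condition = Counter(r["condition"] for r in records if r["kind"] == "sample")
print(by_kind, per_condition)
```

On the real file, `by_kind` should come out as {dataset: 1, task: 10, sample: 60} and `per_condition` as {c0: 10, c1: 30, c2: 10, c3: 10}.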

kind: "dataset"

Header with name, conditions list, and the system-prompt files that pair with each condition.

kind: "task"

One per ground-truth task folder.

{
  "kind": "task",
  "task_id": "1_newsletter",
  "task_index": 1,
  "task_title": "Task 1 — Newsletter / Blog Publication",
  "brand": "*BlogSprout* — a multi-author lifestyle/travel blog",
  "real_world_analogues": "Substack, Beehiiv, Medium",
  "figma_source": "Blog Sprout UI — FREE Figma Blog Web UI Kit and Design System (community)",
  "figma_file_key": "ymUMepRI8Lkg3TlPp5gqw0",
  "num_pages": 9,
  "pages_declared": 9,
  "page_files": ["01_Home.png", "02_Single-post.png", ...],
  "num_interactions_annotated": 9,
  "ground_truth_root": "1_newsletter",
  "anchor_files": ["1_newsletter_anchors.cleaned.json", "1_newsletter_anchors.json", "1_newsletter_anchors.raw.json"],
  "has_sitemap": true,
  "has_interaction_index": false
}

kind: "sample"

One per (condition × task[ × pick]).

{
  "kind": "sample",
  "id": "c1/pick_A/1_newsletter",
  "condition": "c1",
  "pick": "A",
  "framework_pick_description": "Astro (with @astrojs/mdx, @astrojs/sitemap, @astrojs/rss).",
  "task_id": "1_newsletter",
  "task_index": 1,
  "task_title": "Task 1 — Newsletter / Blog Publication",
  "num_pages": 9,
  "agent_system_prompt": "agent_system_promt_c1.md",
  "input_root": "c1/pick_A/1_newsletter",
  "inputs": ["description.md", "pages/*.png", "scaffold/", "workspace/"],
  "ground_truth_root": "1_newsletter"
}

input_root is what the agent should be given as the project root. ground_truth_root is the canonical task folder used for evaluation only — never expose it to the agent.
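That split can be enforced in harness code by only ever deriving the agent's working directory from input_root. A tiny sketch; the `agent_view` helper is hypothetical:

```python
from pathlib import Path

def agent_view(sample: dict, dataset_root: str = ".") -> Path:
    """Directory handed to the agent. Never pass ground_truth_root here."""
    return Path(dataset_root) / sample["input_root"]

sample = {"kind": "sample", "id": "c1/pick_A/1_newsletter",
          "input_root": "c1/pick_A/1_newsletter",
          "ground_truth_root": "1_newsletter"}
print(agent_view(sample).as_posix())  # → c1/pick_A/1_newsletter
```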


Quick recipes

Iterate over all c1/pick_A samples:

python3 - <<'PY'
import json
for line in open("metadata.jsonl"):
    r = json.loads(line)
    if r.get("kind") == "sample" and r["condition"] == "c1" and r.get("pick") == "A":
        print(r["id"], "→", r["input_root"])
PY

Diff expected vs. actual testids for a generated app (matches the system-prompt verification step):

TASK=1_newsletter
APP_DIR=/path/to/generated/app
grep -roE '<[a-z][a-z0-9_-]+>' "$TASK/description.md" | sed -E 's/.*<([^>]+)>.*/\1/' | sort -u > /tmp/expected
grep -roE 'data-testid="[a-z][a-z0-9_-]*"' "$APP_DIR" | sed -E 's/.*"([^"]+)".*/\1/' | sort -u > /tmp/actual
diff /tmp/expected /tmp/actual

Pull the cleaned anchors for evaluation:

jq '.anchors["01_Home"]' 1_newsletter/1_newsletter_anchors.cleaned.json
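Once an anchor's element is located in the generated app, its rendered bbox can be compared against bbox_png. A minimal overlap check in the spirit of the evaluator, assuming bbox_png is [x, y, width, height] in mockup-PNG pixels; the real evaluator's coordinate handling and threshold may differ:

```python
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

anchor = {"testid": "subscribe-button", "bbox_png": [40, 120, 200, 48]}  # illustrative
rendered = [44, 124, 196, 44]  # measured in the app, rescaled to PNG pixels
print(round(iou(anchor["bbox_png"], rendered), 3))  # → 0.898
```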

Notes / gotchas

  • pages_declared (from the description's **Pages:** N header) can disagree with num_pages (count of PNGs on disk) when the description groups several mockups under one logical page (e.g. settings tabs in 10_streaming_music-streaming).
  • manifest.json may contain entries with empty file_key / node_id for pages whose Figma annotation file was missing at extraction time (see the _note field on those entries). The PNG and structure JSON are still present.
  • pages/*.json is the full Figma node tree; pages/*_structure-only.json is the pruned version. Only the latter is shipped in c3/ to keep input size manageable.
  • c1/<pick>/<task>/scaffold/ and c1/<pick>/<task>/workspace/ start as identical copies. scaffold/ is read-only reference; the agent edits workspace/.