# PrototypeBench v0.1
Can your agent ship a full-stack AI-native prototype?
PrototypeBench is an open benchmark for evaluating AI coding agents on full-stack feature shipping. Where SWE-Bench measures bug-fixing in mature Python libraries, PrototypeBench measures "can the agent ship a full-stack feature on a modern AI-native stack?"
- Project home: https://github.com/prototypebench/prototypebench
- Website: https://prototypebench.org
- License: MIT
- Version: v0.1 (initial corpus)
- Language: English (problem statements), Python (backend code), TypeScript/JavaScript (frontend code, future)
## Dataset Summary
71 PR-mined task instances from active open-source repositories, each shaped for SWE-Bench-compatible execution-based scoring:
| Stat | Value |
|---|---|
| Total instances | 71 |
| Sources | 2 (fastapi/full-stack-fastapi-template, IBM/mcp-context-forge) |
| FAIL_TO_PASS tests | 689 |
| PASS_TO_PASS regression-guard tests | 31,644 |
| Total test cases per full eval | 32,333 |
| stack_domain | 71 backend_only (v0.1); frontend & fullstack in later versions |
| contamination_tier | 71 held_out (all post-2026-01-01) |
| Schema version | 0.1 |
Comparison: SWE-Bench Verified has 500 instances, SWE-Bench Lite 300, HumanEval 164. The v1 public beta targets 200–300 instances.
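
As a quick sanity check, the headline counts in the table can be recomputed from the raw instances. A minimal sketch, assuming the struct-valued `fail_to_pass` / `pass_to_pass` fields described under Data Fields (illustrative only, not part of the official harness):

```python
from datasets import load_dataset

ds = load_dataset("banyaaiofficial/prototypebench-v1", split="test")

def n_tests(field: dict) -> int:
    # frontend lists are empty in v0.1 (backend_only); treat missing/None as [].
    return len(field.get("backend") or []) + len(field.get("frontend") or [])

print("instances:", len(ds))                                            # 71
print("sources:  ", len({item["repo"] for item in ds}))                 # 2
print("F2P tests:", sum(n_tests(item["fail_to_pass"]) for item in ds))  # 689
print("P2P tests:", sum(n_tests(item["pass_to_pass"]) for item in ds))  # 31,644
```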
## Scoring
Execution-based binary scoring (no LLM-as-judge):
```
score(instance) = 1   iff  FAIL_TO_PASS ⊆ passing_tests
                      AND  PASS_TO_PASS ⊆ passing_tests   (no regression)
                  0   otherwise
```
Judge: pytest (backend) and Playwright (frontend, future). Ground truth = the actual merged PR diff (hidden from the agent). See the methodology notes.
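
The rule is small enough to state directly in code. A minimal sketch of the set-inclusion check, assuming `passing_tests` is the set of test IDs that passed after the agent's patch and `test_patch` were applied; the actual judge is the companion `pbench` harness:

```python
def score_instance(instance: dict, passing_tests: set[str]) -> int:
    """Binary score: 1 iff every F2P test passes and no P2P test regresses."""
    required = set(instance["fail_to_pass"]["backend"]) | set(instance["fail_to_pass"]["frontend"] or [])
    guarded = set(instance["pass_to_pass"]["backend"]) | set(instance["pass_to_pass"]["frontend"] or [])
    return int(required <= passing_tests and guarded <= passing_tests)
```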
## Usage
```python
from datasets import load_dataset

ds = load_dataset("banyaaiofficial/prototypebench-v1", split="test")

for item in ds:
    print(item["instance_id"])        # e.g. "IBM__mcp-context-forge-4270"
    print(item["problem_statement"])  # NL task spec (PR body or closing issue)
    base_sha = item["base_commit"]    # pre-PR commit; the agent starts here

    # The agent produces a non-test unified diff against base_sha.
    # Score it with the companion harness:
    #   pbench score --source <short> --pr <N> --patch-file agent_patch.diff
```
Each instance extends the SWE-Bench instances.jsonl schema with dual-test fields (fail_to_pass.backend / .frontend, test_patch_backend / test_patch_frontend) for future Playwright integration.
Full schema: https://github.com/prototypebench/prototypebench/blob/main/schemas/task_instance.schema.json
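
If your tooling expects flat SWE-Bench-style test lists, one option is to concatenate the backend and frontend lists. A hypothetical helper (not part of the released harness; field layout as documented above):

```python
def to_swebench_fields(instance: dict) -> dict:
    """Flatten the dual-test fields into SWE-Bench-style FAIL_TO_PASS / PASS_TO_PASS lists."""
    flat = dict(instance)
    flat["FAIL_TO_PASS"] = list(instance["fail_to_pass"]["backend"]) + list(instance["fail_to_pass"]["frontend"] or [])
    flat["PASS_TO_PASS"] = list(instance["pass_to_pass"]["backend"]) + list(instance["pass_to_pass"]["frontend"] or [])
    return flat
```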
## Source Composition
| Source | Stars | License | Instances | F2P | P2P |
|---|---|---|---|---|---|
| fastapi/full-stack-fastapi-template | 42.7k | MIT | 3 | 7 | 77 |
| IBM/mcp-context-forge | 3.6k | Apache-2 | 68 | 682 | 31,567 |
All instances are mined from merged PRs with maintainer-reviewed tests; each captures the natural atomic unit of change (one feature or fix at a time).
## Data Fields
See the task-schema doc for full field-by-field semantics. Highlights:
- `instance_id` – stable unique ID (`owner__repo-<pr_number>`)
- `base_commit` / `head_commit` – SHAs bounding the reference change
- `problem_statement` – natural-language task spec (from the closing issue body, else the PR description)
- `patch` – reference solution (non-test diff); hidden from the agent at evaluation time
- `test_patch` – test-only diff that the harness applies before running the agent's patch
- `fail_to_pass` – `{backend: [...], frontend: [...]}`; tests the agent must make pass
- `pass_to_pass` – `{backend: [...], frontend: [...]}`; regression-guard tests (must not break)
- `stack_domain` – `backend_only` | `frontend_only` | `fullstack`
- `environment` – `python_version`, `node_version`, `uv_lock_sha`, etc. for reproducible builds
- `contamination_tier` – `public` | `held_out` | `internal_only`
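
To illustrate the `instance_id` convention only (this helper is hypothetical, not shipped with the dataset or harness):

```python
def parse_instance_id(instance_id: str) -> tuple[str, str, int]:
    """Split an owner__repo-<pr_number> ID into its parts."""
    owner_repo, _, pr_number = instance_id.rpartition("-")
    owner, _, repo = owner_repo.partition("__")
    return owner, repo, int(pr_number)

assert parse_instance_id("IBM__mcp-context-forge-4270") == ("IBM", "mcp-context-forge", 4270)
```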
## Contamination & Fairness
- Held-out by construction: all v0.1 instances were merged after 2026-01-01 (the Claude Opus 4.7 cutoff). Submitters must disclose their model's training cutoff for point-count adjustment.
- Rotation: the held-out tier is rotated each leaderboard season (Phase 5).
- No vendor branding: the benchmark carries no vendor name. It is hosted under `banyaaiofficial` for convenience only; the benchmark is project-neutral.
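
For cutoff disclosure, a rough sketch of filtering by `created_at`, assuming it is stored as an ISO-8601 date or datetime string (ISO date prefixes compare lexicographically in chronological order; adjust if your copy stores native timestamps):

```python
from datasets import load_dataset

ds = load_dataset("banyaaiofficial/prototypebench-v1", split="test")

def after_cutoff(instance: dict, cutoff_iso: str) -> bool:
    """Keep instances merged strictly after a model's training-data cutoff."""
    return str(instance["created_at"])[:10] > cutoff_iso

# Everything in v0.1 should survive a 2026-01-01 cutoff.
held_out = [item for item in ds if after_cutoff(item, "2026-01-01")]
print(len(held_out))  # expected: 71
```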
## Limitations
- v0.1 is backend-only: no Playwright scoring yet (the harness supports it, but frontend-focused PRs arrive in v1+).
- IBM/mcp-context-forge dominates the corpus with 68 of 71 instances; broader workload coverage is a v1+ priority.
- Test strength bounds benchmark quality: PRs with weak tests are filtered, but not perfectly, so curator review is recommended.
- Execution-based scoring requires actually running the tests (not instantaneous); see the harness for Docker-based reproducible runs.
## Related Benchmarks
- SWE-Bench – Python library bug-fixes (2,294 instances). PrototypeBench extends the pattern to modern AI-native full-stack apps.
- SWE-Bench Lite / Verified – curated subsets.
- Terminal-Bench – CLI tasks.
- BigCodeBench – function-level library-usage tasks.
## Citation
The citation format will be finalized at the Phase 4 public launch. For now:

```bibtex
@misc{prototypebench_v01,
  title = {PrototypeBench v0.1: An AI-native Full-Stack Coding Agent Benchmark},
  year  = {2026},
  url   = {https://github.com/prototypebench/prototypebench},
  note  = {71 instances across 2 source repos; execution-based scoring}
}
```
## Changelog
- v0.1 (2026-04-20): initial corpus. 71 backend_only instances, all held_out. Schema v0.1.