---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- benchmark
- code
- llm-evaluation
- feature-addition
- python
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: test
    path: tensorbench.json
---
# TensorBench
Feature-addition benchmark for LLMs and coding agents, evaluated against the Scorch codebase. Each task asks a model to add a feature (or otherwise extend functionality) to Scorch. Success is defined as the full pytest suite (original + any new tests the model adds) passing after the patch is applied inside a Docker container.
This is the dataset artifact for the TensorBench paper (NeurIPS 2026 Evaluations & Datasets track, double-blind submission).
## At a glance

| | |
|---|---|
| Tasks | 199 (194 feature-addition, 5 refactor) |
| Target codebase | `bobbyyyan/scorch` — a sparse+dense PyTorch compiler |
| Base commits | 5 distinct SHAs across the `bench` branch |
| Language | Python (with a C++ extension) |
| Test runner | `pytest -v` inside Docker |
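Because the card's default config maps a `test` split to `tensorbench.json`, the task list should also load directly through the `datasets` library. A minimal sketch; the repo id below is a placeholder, since the real id is anonymized for review:

```python
from datasets import load_dataset

# Placeholder repo id (the real dataset id is anonymized for review).
tasks = load_dataset("anonymous/tensorbench", split="test")

print(len(tasks))               # expected: 199
print(tasks[0]["instance_id"])  # e.g. bobbyyyan__scorch-feature_kernel_fusion
```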
## Files

| File | Purpose |
|---|---|
| `tensorbench.json` | Task list: a JSON array of 199 task records. |
| `Dockerfile` | Eval image: `python:3.11-slim` + Scorch's C++ build deps + the upstream `bench` clone. |
| `run_tests.sh` | Container `CMD`: rebuilds the C++ extension and runs pytest. |
| `croissant.json` | Croissant metadata (core + RAI fields). |
## Task schema

Each record in `tensorbench.json` has the following fields:

| Field | Type | Description |
|---|---|---|
| `instance_id` | string | e.g. `bobbyyyan__scorch-feature_kernel_fusion`. The `feature_*` / `refactor_*` suffix follows the original taxonomy; the semantics of both are feature-addition. |
| `repo_id` | string | `bobbyyyan__scorch` for every task. |
| `repo_url` | string | Upstream Scorch URL. |
| `base_commit` | string | The SHA the task is anchored at. The harness runs `git reset --hard` to this commit before applying patches. |
| `language` | string | `python`. |
| `setup_commands` | list[string] | Container-side setup hooks. |
| `test_command` | string | `/testbed/run_tests.sh`. |
| `test_timeout` | int | Seconds. Default 3000. |
| `refactor_type` | string | Mostly empty (legacy field from the original taxonomy). |
| `description` | string | Natural-language task prompt the model receives. |
| `files` | list[string] | Files relevant to the task. |
| `task_type` | string | `feature` or `refactor`. |
| `categories` | list[string] | Topical tags (`kernel`, `codegen`, `format`, etc.). |
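For quick local inspection without extra dependencies, the file is plain JSON; a minimal sketch using only the standard library (field names taken from the schema above):

```python
import json
from collections import Counter

with open("tensorbench.json") as f:
    tasks = json.load(f)

# Tally tasks per type and per anchoring commit.
print(Counter(t["task_type"] for t in tasks))        # feature vs. refactor
print(Counter(t["base_commit"][:8] for t in tasks))  # 5 distinct SHAs

# Inspect one task's prompt and target files.
task = tasks[0]
print(task["instance_id"])
print(task["description"][:200])
print(task["files"])
```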
## Grading

Tests run with `pytest -v`; the grading strategy parses the verbose output lines (preserving parametrized brackets like `[False]`) and applies a custom preservation rule:

`success ⇔ after.failed == 0`

This handles the common case where an agent adds new tests alongside new code: if every test (original + agent-added) passes, the task is a success. Default exact-match preservation would flag the new tests as "changed" and produce false failures.
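For illustration, a minimal sketch of the rule, assuming standard `pytest -v` output where each test line starts with the node id and carries a PASSED/FAILED/ERROR status; the actual strategy lives in the `codebench` harness, and `grade` below is a hypothetical name:

```python
import re

# Matches verbose pytest lines such as:
#   tests/test_fusion.py::test_fuse[False] PASSED
# Node ids (including parametrized brackets like [False]) are kept intact.
LINE = re.compile(r"^(?P<node>\S+::\S+)\s+(?P<status>PASSED|FAILED|ERROR)")

def grade(pytest_output: str) -> bool:
    """Success iff no test in the post-patch run failed or errored."""
    failed = 0
    for line in pytest_output.splitlines():
        m = LINE.match(line)
        if m and m.group("status") in ("FAILED", "ERROR"):
            failed += 1
    return failed == 0
```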
## How to evaluate a model

The full evaluation harness (`codebench`) and the project-local benchmark glue (`tensorbench`) are included in the supplementary code submission.
```bash
# 1. Install the harness and benchmark glue
git clone <tensorbench-repo> ~/tensorbench
git clone <codebench-repo> ~/codebench
pip install -e ~/codebench
pip install -r ~/tensorbench/requirements.txt

# 2. Build the eval image
cd ~/tensorbench
docker build -t scorch-eval -f dockerfiles/scorch/Dockerfile dockerfiles/scorch/

# 3. Generate predictions, then grade them
python run.py predict scorch sonnet-4.5
python run.py eval scorch sonnet-4.5
```
Predictions land in `predictions/`; eval outputs land in `evaluation_results/<run_id>/`.
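To sanity-check the image before a full run, it can be invoked directly: per the Files table above, the container's `CMD` is `run_tests.sh`, so the unpatched suite should build and pass. A minimal sketch (the real harness additionally resets the repo to `base_commit` and applies the model's patch before this step):

```python
import subprocess

# Run the unmodified Scorch test suite inside the eval image. The image's
# CMD is run_tests.sh, so no arguments are needed; 3000 s matches the
# default test_timeout in tensorbench.json.
result = subprocess.run(
    ["docker", "run", "--rm", "scorch-eval"],
    capture_output=True, text=True, timeout=3000,
)
print(result.stdout[-2000:])  # tail of the pytest -v log
print("exit code:", result.returncode)
```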
## Anonymity

This is a double-blind submission. Personally identifiable information has been scrubbed from the supplementary code. Reviewers should not attempt to identify the authors via the target codebase URL or other public references.
## License

MIT for the benchmark scaffolding (this dataset, the `Dockerfile`, and the grading strategy). The target Scorch codebase is governed by its own upstream license.
## Citation
(Anonymized for double-blind review. Citation will be added at camera-ready.)