---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- benchmark
- code
- llm-evaluation
- feature-addition
- python
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: test
path: tensorbench.json
---
# TensorBench
Feature-addition benchmark for LLMs and coding agents, evaluated against the
[Scorch](https://github.com/bobbyyyan/scorch) codebase. Each task asks a model
to add a feature (or otherwise extend functionality) to Scorch. Success is
defined as the full pytest suite (original + any new tests the model adds)
passing after the patch is applied inside a Docker container.
This is the dataset artifact for the TensorBench paper (NeurIPS 2026
Evaluations & Datasets track, double-blind submission).
## At a glance
| | |
|---|---|
| Tasks | 199 (194 feature-addition, 5 refactor) |
| Target codebase | [`bobbyyyan/scorch`](https://github.com/bobbyyyan/scorch) — a sparse+dense PyTorch compiler |
| Base commits | 5 distinct SHAs across the `bench` branch |
| Language | Python (with a C++ extension) |
| Test runner | `pytest -v` inside Docker |
## Files
| File | Purpose |
|---|---|
| `tensorbench.json` | Task list. JSON array of 199 task records. |
| `Dockerfile` | Eval image: `python:3.11-slim` + Scorch's C++ build deps + the upstream `bench` clone. |
| `run_tests.sh` | Container `CMD`: rebuilds the C++ extension and runs `pytest`. |
| `croissant.json` | Croissant metadata (core + RAI fields). |
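The task file is a plain JSON array, so it can be inspected without the harness. A minimal sketch, assuming `tensorbench.json` sits in the working directory:
```python
import json

# tensorbench.json is a plain JSON array of task records,
# so the standard library is enough to load it.
with open("tensorbench.json") as f:
    tasks = json.load(f)

print(len(tasks))               # 199
print(tasks[0]["instance_id"])  # e.g. bobbyyyan__scorch-feature_kernel_fusion
```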
## Task schema
Each record in `tensorbench.json` has:
| Field | Type | Description |
|---|---|---|
| `instance_id` | string | e.g. `bobbyyyan__scorch-feature_kernel_fusion`. The `feature_*` / `refactor_*` suffix follows the original taxonomy; the semantics of both are feature-addition. |
| `repo_id` | string | `bobbyyyan__scorch` for every task. |
| `repo_url` | string | Upstream Scorch URL. |
| `base_commit` | string | The SHA the task is anchored at. The harness `git reset --hard`s to this before applying patches. |
| `language` | string | `python`. |
| `setup_commands` | list[string] | Container-side setup hooks. |
| `test_command` | string | `/testbed/run_tests.sh`. |
| `test_timeout` | int | Seconds. Default 3000. |
| `refactor_type` | string | Mostly empty (legacy field from the original taxonomy). |
| `description` | string | Natural-language task prompt the model receives. |
| `files` | list[string] | Files relevant to the task. |
| `task_type` | string | `feature` or `refactor`. |
| `categories` | list[string] | Topical tags (kernel, codegen, format, etc.). |
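Records can be sliced by any of these fields. A small illustrative snippet (the field names are as documented above; the `"kernel"` category filter is just an example):
```python
import json
from collections import Counter

with open("tensorbench.json") as f:  # load as in the snippet above
    tasks = json.load(f)

# Distribution over task_type and base_commit.
print(Counter(t["task_type"] for t in tasks))        # {'feature': 194, 'refactor': 5}
print(Counter(t["base_commit"][:8] for t in tasks))  # 5 distinct SHAs

# All tasks tagged with a given category, e.g. "kernel".
kernel_tasks = [t for t in tasks if "kernel" in t.get("categories", [])]
```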
## Grading
Tests are run with `pytest -v`; the grading strategy parses the verbose output
lines (preserving parametrized brackets like `[False]`) and applies a custom
preservation rule:
```
success ⇔ after.failed == 0
```
This handles the common case where an agent adds new tests alongside new
code: if every test (original + agent-added) passes, the task is a success.
Default exact-match preservation would flag the new tests as "changed" and
produce false failures.
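For reference, here is a minimal sketch of that rule, assuming pytest's standard `-v` output format; the regex and the `grade` helper are illustrative, not the harness's actual API:
```python
import re

# Matches pytest -v result lines, keeping parametrized ids like "[False]"
# (they contain no whitespace, so \S+ captures them intact).
LINE_RE = re.compile(r"^(?P<test>\S+::\S+)\s+(?P<status>PASSED|FAILED|ERROR)")

def grade(after_output: str) -> bool:
    """success <=> after.failed == 0 (no exact-match preservation)."""
    statuses = [m["status"]
                for line in after_output.splitlines()
                if (m := LINE_RE.match(line))]
    return sum(s != "PASSED" for s in statuses) == 0
```
Because only the post-patch failure count matters, tests the agent adds are counted the same way as the original suite, rather than being diffed against the pre-patch results.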
## How to evaluate a model
The full evaluation harness (`codebench`) and the project-local benchmark
glue (`tensorbench`) are in the supplementary code submission.
```bash
# 1. Install the harness and benchmark glue
git clone <tensorbench-repo> ~/tensorbench
git clone <codebench-repo> ~/codebench
pip install -e ~/codebench
pip install -r ~/tensorbench/requirements.txt
# 2. Build the eval image
cd ~/tensorbench
docker build -t scorch-eval -f dockerfiles/scorch/Dockerfile dockerfiles/scorch/
# 3. Generate predictions, then grade them
python run.py predict scorch sonnet-4.5
python run.py eval scorch sonnet-4.5
```
Predictions land in `predictions/`; eval outputs in `evaluation_results/<run_id>/`.
## Anonymity
This is a double-blind submission. Personally identifying information has
been scrubbed from the supplementary code. Reviewers should not attempt to
identify the authors via the target codebase URL or other public references.
## License
MIT for the benchmark scaffolding (this dataset, Dockerfile, grading
strategy). The target Scorch codebase is governed by its own upstream
license.
## Citation
(Anonymized for double-blind review. Citation will be added at camera-ready.)