---
license: mit
---
# VeriSoftBench
VeriSoftBench is a benchmark for evaluating neural theorem provers on software verification tasks in Lean 4.
The dataset contains 500 theorem-proving tasks drawn from 23 real-world Lean 4 repositories spanning compiler verification, type system formalization, applied verification (zero-knowledge proofs, smart contracts), semantic frameworks, and more.
📄 Paper (arXiv): https://arxiv.org/html/2602.18307v1
💻 Full benchmark + pipeline + setup: https://github.com/utopia-group/VeriSoftBench
## Dataset Contents
This Hugging Face release contains only the benchmark tasks. For the full end-to-end evaluation pipeline, please refer to the GitHub repository:
👉 https://github.com/utopia-group/VeriSoftBench
Each task in `verisoftbench.jsonl` contains:
- Theorem name, statement, and source location
- Filtered dependencies (library defs, repo defs, local context, lemmas)
- Ground truth proof
- Metadata (category, difficulty metrics, Aristotle subset membership)
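
Once downloaded, the JSONL file can be read one task per line with the standard library. The sketch below is a minimal example; the field names in the sample record (`theorem_name`, `statement`, `dependencies`, `ground_truth_proof`, `metadata`) are illustrative guesses based on the field list above, not the dataset's exact schema — inspect one real line to confirm the keys.

```python
import json

# Hypothetical sample record: keys are illustrative, inferred from the
# field list above, NOT guaranteed to match the dataset's actual schema.
sample_line = json.dumps({
    "theorem_name": "List.append_assoc",
    "statement": "theorem append_assoc (a b c : List α) : "
                 "a ++ b ++ c = a ++ (b ++ c)",
    "source_location": "Example/Data/List/Basic.lean",
    "dependencies": {
        "library_defs": [],
        "repo_defs": [],
        "local_context": [],
        "lemmas": [],
    },
    "ground_truth_proof": "by induction a <;> simp_all",
    "metadata": {"category": "example", "aristotle_subset": False},
})

def load_tasks(lines):
    """Parse JSONL lines (one JSON object per line) into task dicts."""
    return [json.loads(line) for line in lines if line.strip()]

# In practice: with open("verisoftbench.jsonl") as f: tasks = load_tasks(f)
tasks = load_tasks([sample_line])
print(tasks[0]["theorem_name"])  # → List.append_assoc
```

Each parsed dict then exposes the theorem statement, its filtered dependencies, and the ground-truth proof for constructing prover prompts or checking outputs.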
## Citation
```bibtex
@misc{xin2026verisoftbenchrepositoryscaleformalverification,
  title={VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean},
  author={Yutong Xin and Qiaochu Chen and Greg Durrett and Işil Dillig},
  year={2026},
  eprint={2602.18307},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2602.18307},
}
```