Improve dataset card: Add paper/project links, task categories, tags, description, and sample usage
#2 opened by nielsr (HF Staff)

README.md CHANGED

```diff
@@ -1,5 +1,14 @@
 ---
 license: apache-2.0
+language:
+- en
+task_categories:
+- code-generation
+tags:
+- lean
+- formal-verification
+- theorem-proving
+- benchmark
 dataset_info:
   features:
   - name: id
@@ -61,4 +70,89 @@ configs:
```

# Verina: Benchmarking Verifiable Code Generation
Verina (Verifiable Code Generation Arena) is a high-quality benchmark for comprehensive, modular evaluation of code generation, specification generation, and proof generation, as well as their compositions. It fills a significant gap in existing evaluations by providing a holistic framework rather than focusing on individual components. Verina consists of 189 manually curated coding tasks in Lean, each with a detailed problem description, a reference implementation, a formal specification, and an extensive test suite. The benchmark aims to catalyze progress in verifiable code generation by providing a rigorous and comprehensive evaluation platform.
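As a schematic illustration of the code–specification–proof triple that each task involves, here is a minimal, self-contained Lean 4 sketch (illustrative only, with made-up names; not an actual Verina datapoint):

```lean
-- Reference implementation
def myMax (a b : Nat) : Nat :=
  if a ≥ b then a else b

-- Formal specification: the result bounds both inputs and equals one of them
def myMaxSpec (a b r : Nat) : Prop :=
  r ≥ a ∧ r ≥ b ∧ (r = a ∨ r = b)

-- Proof that the implementation satisfies the specification
theorem myMax_correct (a b : Nat) : myMaxSpec a b (myMax a b) := by
  unfold myMax myMaxSpec
  split <;> omega
```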

**Paper:** [VERINA: Benchmarking Verifiable Code Generation](https://huggingface.co/papers/2505.23135)
**Project Page:** [https://verina.io](https://verina.io)
**Code:** [https://github.com/sunblaze-ucb/verina](https://github.com/sunblaze-ucb/verina)

## Dataset Structure

This Hugging Face dataset is an aggregated version of the benchmark data from the `datasets/verina` directory in the official GitHub repository. Each original datapoint in the benchmark is organized as a folder containing the following files:

- `task.json`: a JSON file describing the task, including the id, task signature, paths to necessary data files, and other metadata.
- `description.txt`: the natural-language description of the programming task.
- `task.lean`: the Lean 4 file containing the ground-truth code, specification, and proof.
- `test.json` and `reject_inputs.json`: JSON files with test cases and rejected inputs for the task.
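A minimal sketch of working with one such datapoint folder using only the standard library (the folder name and any JSON field other than `id` are illustrative assumptions, not the benchmark's actual schema):

```python
import json
from pathlib import Path
from tempfile import mkdtemp

# Build a toy datapoint folder mirroring the layout described above
# (illustrative only; fields other than "id" are assumptions).
root = Path(mkdtemp()) / "toy_task"
root.mkdir(parents=True)
(root / "task.json").write_text(json.dumps({"id": "toy_task"}))
(root / "description.txt").write_text("Return the maximum of two natural numbers.")

# Reading a datapoint back is plain file I/O against the structure above.
task = json.loads((root / "task.json").read_text())
description = (root / "description.txt").read_text()
print(task["id"], "->", description)
```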

## Sample Usage

To use the Verina benchmark, you need `uv` and `lean` installed. The following snippets show how to set up the environment, run the benchmark, and read its results.

### Prerequisites

- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- [lean](https://docs.lean-lang.org/lean4/doc/quickstart.html)
- Docker (optional, for the Prefect server)

### Setup

```bash
uv sync
source .venv/bin/activate  # Activate the virtual environment created by uv
lake exe cache get
lake update
```

### Running Benchmarks on Baselines

First, start the Prefect server (Docker is optional; a local PostgreSQL or SQLite database can be used instead):

```bash
docker compose up -d  # Start the Prefect database in the background
uv run prefect server start
```

Then, run the benchmark using a configuration file (e.g., `configs/[config_name].toml`):

```bash
PREFECT_API_URL=http://127.0.0.1:4200/api uv run scripts/benchmark.py -c configs/[config_name].toml
```

You can also separate the generation and evaluation steps for faster execution:

```bash
# Generation only
PREFECT_API_URL=http://127.0.0.1:4200/api uv run scripts/benchmark.py -c configs/<config_name>.toml --no-eval

# Evaluation only
PREFECT_API_URL=http://127.0.0.1:4200/api uv run scripts/benchmark.py -c configs/<config_name>.toml --no-gen -ew <evaluation_worker_num_override>
```

### Reading Results

Detailed results are saved in the `output_dir` specified in your configuration. You can obtain a summary with the following Python snippet:

```python
from pathlib import Path
from src.verina.benchmark.report import EvaluationRoundsReport
from src.verina.benchmark.summary import DatapointSummaryReport

output_dir = Path("<your_output_dir>")  # Replace with your actual output directory
report = EvaluationRoundsReport.load_latest(output_dir)
summary = DatapointSummaryReport.from_rounds_report(report)
print(summary.pass_at_k(1))
```
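For reference, pass@k metrics of this kind are conventionally computed with the unbiased estimator from Chen et al. (2021). A minimal sketch of that textbook formula (not necessarily Verina's exact implementation):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n = total samples and c = correct samples."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 0, 1))  # 0.0: no correct samples
print(pass_at_k(4, 2, 1))   # 0.5: half of single draws succeed
```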

## Citation

If you use Verina in your research, please cite the following paper:

```bibtex
@article{ye2025verina,
  title={VERINA: Benchmarking Verifiable Code Generation},
  author={Ye, Zhe and Yan, Zhengxu and He, Jingxuan and Kasriel, Timothe and Yang, Kaiyu and Song, Dawn},
  journal={arXiv preprint arXiv:2505.23135},
  year={2025}
}
```