---
license: apache-2.0
language:
  - en
task_categories:
  - code-generation
tags:
  - lean
  - formal-verification
  - theorem-proving
  - benchmark
dataset_info:
  features:
    - name: id
      dtype: string
    - name: description
      dtype: string
    - name: lean_code
      dtype: string
    - name: signature
      struct:
        - name: name
          dtype: string
        - name: parameters
          sequence:
            - name: param_name
              dtype: string
            - name: param_type
              dtype: string
        - name: return_type
          dtype: string
    - name: metadata
      struct:
        - name: upstream
          struct:
            - name: name
              dtype: string
            - name: link
              dtype: string
            - name: task_id
              dtype: string
            - name: student_id
              sequence: int64
    - name: tests
      sequence:
        - name: input
          dtype: string
        - name: expected
          sequence: string
        - name: unexpected
          sequence: string
    - name: reject_inputs
      sequence:
        - name: input
          dtype: string
    - name: difficulty
      dtype: string
  splits:
    - name: train
      num_bytes: 609882
      num_examples: 189
  download_size: 211743
  dataset_size: 609882
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Verina: Benchmarking Verifiable Code Generation

Verina (Verifiable Code Generation Arena) is a high-quality benchmark enabling comprehensive, modular evaluation of code, specification, and proof generation, as well as their compositions. It addresses a significant gap in existing evaluations by providing a holistic framework rather than focusing on individual components. Verina consists of 189 manually curated coding tasks in Lean, each with a detailed problem description, a reference implementation, a formal specification, and an extensive test suite. The benchmark aims to catalyze progress in verifiable code generation by providing a rigorous and comprehensive evaluation platform.

- Paper: [VERINA: Benchmarking Verifiable Code Generation](https://arxiv.org/abs/2505.23135)
- Project page: https://verina.io
- Code: https://github.com/sunblaze-ucb/verina

## Dataset Structure

This Hugging Face dataset is an aggregated version of the benchmark data from the datasets/verina directory in the official GitHub repository. Each original datapoint in the benchmark is organized as a folder containing the following files:

- `task.json`: a JSON file describing the task, including the id, task signature, paths to the necessary data files, and other metadata.
- `description.txt`: the natural-language description of the programming task.
- `task.lean`: the Lean 4 file containing the ground-truth code, specification, and proof.
- `test.json` and `reject_inputs.json`: JSON files with the test cases and rejected inputs for the task.
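To make the task layout concrete, here is an illustrative sketch of the shape a `task.lean` file might take: an implementation, a formal specification, and a proof connecting the two. The task, names, and proof style below are made up for illustration and are not taken from the benchmark itself.

```lean
-- Illustrative only: real Verina tasks define their own names and structure.
def myMax (a b : Int) : Int :=
  if a ≥ b then a else b

-- Formal specification: the result is an upper bound and equals one of the inputs.
def myMax_spec (a b result : Int) : Prop :=
  a ≤ result ∧ b ≤ result ∧ (result = a ∨ result = b)

-- Ground-truth proof that the implementation satisfies the specification.
theorem myMax_satisfies_spec (a b : Int) :
    myMax_spec a b (myMax a b) := by
  unfold myMax myMax_spec
  split <;> omega
```

The benchmark's modular design lets each of these three pieces (code, spec, proof) be generated and evaluated independently or in composition.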

## Sample Usage

To use the Verina benchmark, you'll need `uv` and Lean installed. The following snippets show how to set up the environment, run the benchmark, and interpret its results.

### Prerequisites

- `uv`
- Lean
- Docker (optional, for the Prefect server)

### Setup

```bash
uv sync
source .venv/bin/activate  # activate the virtual environment created by uv
lake exe cache get
lake update
```

### Running Benchmarks on Baselines

First, start the Prefect server (Docker is optional; a local PostgreSQL or SQLite can be used):

```bash
docker compose up -d  # starts the database for Prefect in the background
uv run prefect server start
```

Then, run the benchmark using a configuration file (e.g., `configs/[config_name].toml`):

```bash
PREFECT_API_URL=http://127.0.0.1:4200/api uv run scripts/benchmark.py -c configs/[config_name].toml
```

You can also run the generation and evaluation steps separately, which allows faster iteration:

```bash
# Generation only
PREFECT_API_URL=http://127.0.0.1:4200/api uv run scripts/benchmark.py -c configs/<config_name>.toml --no-eval

# Evaluation only
PREFECT_API_URL=http://127.0.0.1:4200/api uv run scripts/benchmark.py -c configs/<config_name>.toml --no-gen -ew <evaluation_worker_num_override>
```

### Reading Results

Detailed results are saved in the `output_dir` specified in your configuration. You can obtain a summary using the following Python snippet:

```python
from pathlib import Path
from src.verina.benchmark.report import EvaluationRoundsReport
from src.verina.benchmark.summary import DatapointSummaryReport

output_dir = Path("<your_output_dir>")  # replace with your actual output directory
report = EvaluationRoundsReport.load_latest(output_dir)
summary = DatapointSummaryReport.from_rounds_report(report)
print(summary.pass_at_k(1))
```
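The `pass_at_k` summary above is computed by the repository's own code. As a reference point for interpreting such numbers, here is a sketch of the standard unbiased pass@k estimator popularized by Chen et al. (2021): given `n` samples per task of which `c` pass, it estimates the probability that at least one of `k` randomly drawn samples passes. This re-implementation is illustrative and is not necessarily identical to what the Verina summary code computes.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total number of generated samples for a task
    c: number of samples that pass
    k: budget of samples drawn
    Returns 1 - C(n - c, k) / C(n, k), the probability that at least
    one of k samples (drawn without replacement from the n) passes.
    """
    if n - c < k:
        # Fewer than k failing samples exist, so any draw of k must
        # include at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples, 3 passing, budget of 1 draw.
print(pass_at_k(10, 3, 1))  # 0.3
```

The per-task estimates are typically averaged over all 189 tasks to produce a single benchmark score.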

## Citation

If you use Verina in your research, please cite the following paper:

```bibtex
@article{ye2025verina,
  title={VERINA: Benchmarking Verifiable Code Generation},
  author={Ye, Zhe and Yan, Zhengxu and He, Jingxuan and Kasriel, Timothe and Yang, Kaiyu and Song, Dawn},
  journal={arXiv preprint arXiv:2505.23135},
  year={2025}
}
```