---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- lean4
- theorem-proving
- code-generation
- benchmark
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*.parquet
---
# LeanBench Dataset
A benchmark dataset for evaluating AI systems on Lean 4 theorem proving tasks.
## Dataset Description
This dataset contains 482 tasks extracted from real Lean 4 pull requests.
## Files
- `leanbench_tasks.csv` - Full dataset in CSV format
- `data/train-00000-of-00001.parquet` - Dataset in Parquet format (for the `datasets` library)
## Task Format
Each row represents a single task with the following key fields:
| Field | Description |
|---|---|
| `task_id` | Unique identifier (e.g., `LB-0001`) |
| `task_type` | Type of task (e.g., `pr_completion`) |
| `difficulty` | Difficulty level (easy/medium/hard) |
| `difficulty_score` | Numeric difficulty score |
| `repo` | Source GitHub repository |
| `pr_number` | Pull request number |
| `problem_statement` | Natural language description of the task |
| `golden_patch` | Expected solution (diff format) |
| `verification_command` | Command to verify the solution |
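A typical evaluation loop applies a candidate patch and then runs the task's `verification_command`. The helper below is a minimal sketch of that last step (the function name and the assumption that the patch is already applied are ours, not part of the dataset):

```python
import subprocess

def verify(task: dict, cwd: str) -> bool:
    """Run a task's verification_command in the checked-out repo.

    Sketch only: assumes the candidate patch has already been
    applied in `cwd`. Returns True if the command exits with 0.
    """
    result = subprocess.run(task["verification_command"], shell=True, cwd=cwd)
    return result.returncode == 0
```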
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("foundry-ai/leanbench")

# Access tasks
for task in dataset["train"]:
    print(task["task_id"], task["difficulty"])
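Since each row carries a `difficulty` field, you can bucket tasks for stratified evaluation with plain Python. The rows below are hypothetical stand-ins for real dataset rows:

```python
from collections import defaultdict

# Hypothetical rows mirroring the task_id / difficulty fields.
tasks = [
    {"task_id": "LB-0001", "difficulty": "easy"},
    {"task_id": "LB-0002", "difficulty": "hard"},
    {"task_id": "LB-0003", "difficulty": "easy"},
]

by_difficulty = defaultdict(list)
for task in tasks:
    by_difficulty[task["difficulty"]].append(task["task_id"])

print(dict(by_difficulty))
# {'easy': ['LB-0001', 'LB-0003'], 'hard': ['LB-0002']}
```

The same grouping works on the loaded dataset by iterating over `dataset["train"]` instead of the sample list.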
## Statistics
- Total tasks: 482
- Easy: 330
- Medium: 121
- Hard: 31
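The counts above can be turned into a percentage split, which is handy when reporting per-difficulty accuracy:

```python
# Recompute the difficulty split from the stated counts.
counts = {"easy": 330, "medium": 121, "hard": 31}
total = sum(counts.values())  # 482, matching the stated total
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(shares)  # {'easy': 68.5, 'medium': 25.1, 'hard': 6.4}
```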
## License
Apache 2.0