---
license: cc-by-nd-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: prob_zh
    dtype: string
  - name: prob_en
    dtype: string
  - name: algorithm_tag_zh
    dtype: string
  - name: algorithm_tag_en
    dtype: string
  - name: level
    dtype: string
  - name: canonical_solution
    dtype: string
  - name: test_case
    list:
    - name: input
      dtype: string
    - name: output
      dtype: string
  - name: pseudo_code
    dtype: string
  - name: buggy_code
    dtype: string
  - name: corrupted_code
    dtype: string
  splits:
  - name: test
    num_bytes: 7818636649
    num_examples: 250
  download_size: 5518873050
  dataset_size: 7818636649
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# OIBench Dataset
## Dataset Overview
[OIBench](https://arxiv.org/abs/2506.10481) is a high-quality, private, and challenging olympiad-level informatics benchmark consisting of 250 carefully curated original problems.
This Hugging Face repository contains the problem statements, canonical solutions, and associated metadata such as test cases, pseudocode, and difficulty levels, stored in Parquet format for efficient access and analysis.
The dataset provides complete information for all 250 problems. Load it with `dataset = load_dataset("AGI-Eval/OIBench")`; because the test cases are large, the default Dataset Viewer on Hugging Face may not display them in full.
Competition records of human participants are provided in `human_participants_data.parquet`. For detailed usage, see https://github.com/AGI-Eval-Official/OIBench
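A minimal sketch of fetching the human-participant records with `huggingface_hub` and pandas. The parquet schema is not documented on this card, so `percentile_rank` below is a hypothetical metric over a score column you would identify after inspecting the file:

```python
def load_human_records():
    """Download the human competition records from the OIBench dataset repo."""
    import pandas as pd
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    path = hf_hub_download(
        repo_id="AGI-Eval/OIBench",
        filename="human_participants_data.parquet",
        repo_type="dataset",
    )
    return pd.read_parquet(path)


def percentile_rank(human_scores, model_score):
    """Fraction of human scores that model_score meets or beats.

    Hypothetical metric: the actual column names and scoring scheme
    must be taken from the parquet file / the GitHub repo.
    """
    if not human_scores:
        return 0.0
    return sum(1 for s in human_scores if model_score >= s) / len(human_scores)
```

Inspect `load_human_records().columns` first; nothing here should be assumed about the file's actual layout.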
## Dataset Structure
The dataset includes the following fields:
- **`id`**: Problem ID (e.g., `000`, `001`, ..., `249`)
- **`prob_zh`**: Problem description in Chinese
- **`prob_en`**: Problem description in English
- **`algorithm_tag_zh`**: Algorithm tags in Chinese
- **`algorithm_tag_en`**: Algorithm tags in English
- **`level`**: Problem difficulty
- **`canonical_solution`**: Official solution code in C++
- **`test_case`**: List of test cases; each entry is an object containing:
  - `input`: the input for the test case
  - `output`: the expected output for the test case
- **`pseudo_code`**: Pseudocode for the algorithm
- **`buggy_code`**: Buggy code for the problem
- **`corrupted_code`**: Incomplete code for the problem
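The `test_case` field lends itself to a simple exact-match judge. A minimal sketch, assuming exact-match grading with trailing-whitespace normalization (a common OI convention, not something this card specifies); `run_program` stands in for however you execute a candidate solution:

```python
def normalize(text: str) -> str:
    """Strip trailing whitespace per line and trailing newlines."""
    return "\n".join(line.rstrip() for line in text.rstrip().split("\n"))


def judge(test_cases, run_program) -> bool:
    """True iff run_program(input) matches the expected output on every case."""
    return all(
        normalize(run_program(case["input"])) == normalize(case["output"])
        for case in test_cases
    )


# Toy example with an 'add two numbers' solution:
cases = [{"input": "1 2\n", "output": "3\n"}, {"input": "5 7\n", "output": "12\n"}]
solve = lambda inp: str(sum(map(int, inp.split()))) + "\n"
print(judge(cases, solve))  # True
```

Real grading (time/memory limits, sandboxing, partial scoring) is handled by the evaluation harness in the GitHub repo; this sketch only illustrates the field's structure.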
## Usage
You can load the dataset in Python as follows:
```python
from datasets import load_dataset
dataset = load_dataset("AGI-Eval/OIBench")
print(dataset)
```
For more usage details, refer to our GitHub repository: https://github.com/AGI-Eval-Official/OIBench
## Citation
```bibtex
@misc{zhu2025oibenchbenchmarkingstrongreasoning,
title={OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics},
author={Yaoming Zhu and Junxin Wang and Yiyang Li and Lin Qiu and ZongYu Wang and Jun Xu and Xuezhi Cao and Yuhuai Wei and Mingshi Wang and Xunliang Cai and Rong Ma},
year={2025},
eprint={2506.10481},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.10481},
}
```
Corresponding author: Lin Qiu (qiulin07@meituan.com)