---
license: cc-by-nd-4.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: prob_zh
      dtype: string
    - name: prob_en
      dtype: string
    - name: algorithm_tag_zh
      dtype: string
    - name: algorithm_tag_en
      dtype: string
    - name: level
      dtype: string
    - name: canonical_solution
      dtype: string
    - name: test_case
      list:
        - name: input
          dtype: string
        - name: output
          dtype: string
    - name: pseudo_code
      dtype: string
    - name: buggy_code
      dtype: string
    - name: corrupted_code
      dtype: string
  splits:
    - name: test
      num_bytes: 7818636649
      num_examples: 250
  download_size: 5518873050
  dataset_size: 7818636649
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# OIBench Dataset

## Dataset Overview

OIBench is a high-quality, private, and challenging olympiad-level informatics benchmark consisting of 250 carefully curated original problems.

This Hugging Face repository contains the problem statements, official solutions, and associated metadata such as test cases, pseudo code, and difficulty levels. The data is stored in Parquet format for efficient access and analysis.

The repository contains complete information for all 250 problems. Load it with `dataset = load_dataset("AGI-Eval/OIBench")`; because the test cases are large, the default Dataset Viewer on Hugging Face may not display them fully.

We also provide the competition records of human participants in `human_participants_data.parquet`. For detailed usage, refer to https://github.com/AGI-Eval-Official/OIBench.

## Dataset Structure

The dataset includes the following fields:

- `id`: Problem ID (e.g., `000`, `001`, ..., `249`)
- `prob_zh`: Problem description in Chinese
- `prob_en`: Problem description in English
- `algorithm_tag_zh`: Algorithm tags in Chinese
- `algorithm_tag_en`: Algorithm tags in English
- `level`: Problem difficulty
- `canonical_solution`: Official solution code in C++
- `test_case`: List of test cases, each an object with:
  - `input`: The input for the test case
  - `output`: The expected output for the test case
- `pseudo_code`: Pseudo code for the algorithm
- `buggy_code`: Buggy code for the problem
- `corrupted_code`: Incomplete code for the problem
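Since `input` and `output` are plain strings, judging a candidate program typically reduces to comparing its stdout against the stored `output` for each test case. A minimal sketch of such a checker follows; the whitespace-normalization rule is a common OI-judge convention assumed here, not something specified by the dataset:

```python
def normalize(text: str) -> str:
    """Strip trailing whitespace per line and trailing blank lines
    (a common judging tolerance; an assumption, not the dataset spec)."""
    return "\n".join(line.rstrip() for line in text.rstrip().splitlines())

def passes_all(actual_outputs, test_cases) -> bool:
    """True iff each produced output matches the expected `output`
    of the corresponding entry in the `test_case` list."""
    if len(actual_outputs) != len(test_cases):
        return False
    return all(
        normalize(got) == normalize(case["output"])
        for got, case in zip(actual_outputs, test_cases)
    )

# Illustrative entries in the dataset's test_case schema (not real data):
cases = [{"input": "1 2\n", "output": "3\n"},
         {"input": "5 7\n", "output": "12\n"}]
print(passes_all(["3", "12 \n"], cases))  # True: trailing whitespace tolerated
print(passes_all(["3", "13"], cases))     # False: wrong answer on case 2
```

In practice `actual_outputs` would come from running a solution binary on each `input`, e.g. via `subprocess.run`.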

## Usage

You can load the dataset in your Python code using the following example:

```python
from datasets import load_dataset

dataset = load_dataset("AGI-Eval/OIBench")
print(dataset)
```

For more usage details, refer to our GitHub Repo: https://github.com/AGI-Eval-Official/OIBench
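Once loaded, each row is a plain dict, so standard Python filtering works for slicing the benchmark by metadata, e.g. to report scores per difficulty level. A sketch with illustrative records mirroring the schema above (the actual `level` and tag values should be checked against the dataset; they are placeholders here):

```python
# Illustrative records in the OIBench schema (not real data).
problems = [
    {"id": "000", "level": "hard", "algorithm_tag_en": "dynamic programming"},
    {"id": "001", "level": "easy", "algorithm_tag_en": "greedy"},
    {"id": "002", "level": "hard", "algorithm_tag_en": "graph theory"},
]

# Select the subset at a given difficulty.
hard = [p for p in problems if p["level"] == "hard"]
print([p["id"] for p in hard])  # ['000', '002']
```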

## Citation

```bibtex
@misc{zhu2025oibenchbenchmarkingstrongreasoning,
      title={OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics},
      author={Yaoming Zhu and Junxin Wang and Yiyang Li and Lin Qiu and ZongYu Wang and Jun Xu and Xuezhi Cao and Yuhuai Wei and Mingshi Wang and Xunliang Cai and Rong Ma},
      year={2025},
      eprint={2506.10481},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.10481},
}
```

Corresponding Author: Lin Qiu (qiulin07@meituan.com)