---
configs:
  - config_name: release_v1
    data_files:
      - split: test
        path: release_v1/test_50375e15.parquet
    dataset_info:
      features:
        - name: id
          dtype: int64
        - name: problem_id
          dtype: string
        - name: problem_desc
          dtype: string
        - name: time_limit_ms
          dtype: int64
        - name: memory_limit_MB
          dtype: int64
        - name: checker
          dtype: string
        - name: test_cases
          list:
            - name: input
              dtype: string
            - name: output
              dtype: string
license: cc-by-4.0
language:
  - en
tags:
  - benchmark
  - competitive-programming
task_categories:
  - text-generation
---

# CF-Div2-Stepfun Evaluation Benchmark

Offline benchmark of 53 Div.2 CodeForces problems.

## Introduction

We introduce CF-Div2-Stepfun, a dataset curated to benchmark the competitive programming capabilities of Large Language Models (LLMs). We evaluate our proprietary Step 3.5 Flash alongside several frontier models on this benchmark.

The benchmark comprises 53 problems sourced from official CodeForces Division 2 contests held between September 2024 and February 2025. We develop an offline evaluation framework that utilizes a local grading mechanism as an alternative to real-time online submissions.

The generated test cases consist of:

- Small-scale test cases, for initial functional verification.
- Randomized large-scale data, for performance and complexity verification.
- Handcrafted edge cases, derived from common error patterns and "hacked" submissions from real contest participants.
- Automated stress-testing data, generated by repeatedly producing random cases until one distinguishes a failed submission from a correct one (see the sketch below).
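
As an illustration of that last step, a minimal stress-testing loop might look like the sketch below. This is not the generator used to build the dataset; `random_case`, `reference.exe`, and `failed.exe` are placeholder names for a hypothetical input generator, a known-correct solution, and a known-incorrect one.

```python
import random
import subprocess

def run(binary: str, stdin_text: str) -> str:
    """Run a compiled solution on the given input and return its stdout."""
    return subprocess.run([binary], input=stdin_text, capture_output=True,
                          text=True, timeout=10).stdout

def random_case() -> str:
    """Hypothetical generator: a single test with small random n and k."""
    n, k = random.randint(1, 100), random.randint(1, 100)
    return f"1\n{n} {k}\n"

# Keep generating cases until one separates the failed submission from the
# reference solution, then keep it as a stress test.
while True:
    case = random_case()
    if run("./reference.exe", case) != run("./failed.exe", case):
        print("distinguishing case found:\n" + case)
        break
```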

To validate the reliability of this benchmark, we run both correct and representative failed submissions from the original contests. Our evaluator correctly identifies 100% of the accepted submissions as "Passed," while 92.45% of the failed submissions are accurately flagged.

## Quickstart

```python
from datasets import load_dataset
from pathlib import Path
import os

dataset = load_dataset("stepfun-ai/CF-Div2-Stepfun", name="release_v1", split="test")

# An evaluation example for the problem with id=1 (the first row) is given below.
# Make sure you have compiled the necessary checker first.
# Here the checker is "ncmp"; for other problems, inspect the `checker` field.
# > git clone https://github.com/MikeMirzayanov/testlib.git
# > g++ -std=c++20 -Wall -Wextra --static -I testlib/ testlib/checkers/ncmp.cpp -o ncmp
assert dataset[0]["problem_id"] == "codeforces/2020A"
assert dataset[0]["checker"] == "ncmp"

# prepare a submission
submission_code = r"""
#include <bits/stdc++.h>
using namespace std;
 
int find_min_oper(int n, int k){
    if(k == 1) return n;
    int ans = 0;
    while(n){
        ans += n%k;
        n /= k;
    }
    return ans;
}

int main()
{
    int t;
    cin >> t;
    while(t--){
        int n,k;
        cin >> n >> k;
        cout << find_min_oper(n,k) << "\n";
    }
    return 0;
}
"""

eval_dir = Path("./eval_test")
eval_dir.mkdir(exist_ok=True)
with open(eval_dir / "code.cpp", "w") as fout:
    fout.write(submission_code)

# run compilation
ret = os.system(f"g++ -std=c++20 -fno-asm -fsanitize=bounds -fno-sanitize-recover=bounds -static -O2 -DONLINE_JUDGE -o {eval_dir / 'code.exe'} {eval_dir / 'code.cpp'}")
assert ret == 0

for idx, test_case in enumerate(dataset[0]["test_cases"]):

    with open(eval_dir / f"{idx}.in", "w") as fout:
        fout.write(test_case["input"])
    with open(eval_dir / f"{idx}.ans", "w") as fout:
        fout.write(test_case["output"])

    # run the code
    # apply additional time / memory constraints if needed (see the note after this block)
    ret = os.system(f"{eval_dir / 'code.exe'} < {eval_dir}/{idx}.in > {eval_dir}/{idx}.out")
    assert ret == 0

    # run checker
    ret = os.system(f"./ncmp {eval_dir}/{idx}.in {eval_dir}/{idx}.out {eval_dir}/{idx}.ans")
    assert ret == 0
```
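
The loop above relies on `os.system` for brevity and does not enforce the per-problem limits. One way to apply `time_limit_ms` is `subprocess.run` with a timeout; this is only a sketch, not the official evaluation harness:

```python
import subprocess

time_limit_s = dataset[0]["time_limit_ms"] / 1000.0

for idx, test_case in enumerate(dataset[0]["test_cases"]):
    try:
        result = subprocess.run(
            [str(eval_dir / "code.exe")],
            input=test_case["input"],
            capture_output=True,
            text=True,
            timeout=time_limit_s,  # kill the process once the time limit is exceeded
        )
    except subprocess.TimeoutExpired:
        print(f"test {idx}: Time Limit Exceeded")
        continue
    with open(eval_dir / f"{idx}.out", "w") as fout:
        fout.write(result.stdout)
    # invoke the checker on the .in / .out / .ans triple as in the loop above
```

`memory_limit_MB` could be enforced similarly, for example with `resource.setrlimit` in a `preexec_fn`, or by running the submission inside a sandbox.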

## Evaluation Details

The evaluation results are shown below.

| Model | C++ (avg@8) | Python (avg@8) | Java (avg@8) | C++ (pass@8 rating) |
|---|---|---|---|---|
| Step 3.5 Flash | 86.1% | 81.5% | 77.1% | 2489 |
| Gemini 3.0 Pro | 83.5% | 74.1% | 81.6% | 2397 |
| Deepseek V3.2 | 81.6% | 66.5% | 80.7% | 2319 |
| GLM-4.7 | 74.1% | 63.0% | 70.5% | 2156 |
| Claude Opus 4.5 | 72.2% | 68.4% | 68.9% | 2100 |
| Kimi K2-Thinking | 67.9% | 60.4% | 58.5% | 1976 |
| Minimax-M2.1 | 59.0% | 46.4% | 58.0% | 1869 |
| Mimo-V2 Flash | 46.9% | 43.6% | 39.6% | 1658 |

We use the following prompt for all model evaluations:

```
You are a coding expert. Given a competition-level coding problem, you need to write a {LANGUAGE} program to solve it. You may start by outlining your thought process. In the end, please provide the complete code in a code block enclosed with ``` ```.
{question}
```
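
For illustration, the template can be filled directly from a dataset row loaded in the Quickstart; the exact formatting used by the official harness may differ:

```python
PROMPT_TEMPLATE = (
    "You are a coding expert. Given a competition-level coding problem, you need to "
    "write a {LANGUAGE} program to solve it. You may start by outlining your thought "
    "process. In the end, please provide the complete code in a code block enclosed "
    "with ``` ```.\n{question}"
)

# Fill the template from a dataset row (illustrative only).
prompt = PROMPT_TEMPLATE.format(
    LANGUAGE="C++",
    question=dataset[0]["problem_desc"],
)
```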

The compilation and execution commands for C++, Python, and Java are given below:

```bash
g++ -std=c++20 -fno-asm -fsanitize=bounds -fno-sanitize-recover=bounds -static -O2 -DONLINE_JUDGE -o code.exe code.cpp
./code.exe

python3 code.py

javac -J-Xmx544m {JAVA_CLASS_NAME}.java
java -XX:+UseSerialGC -Xmx544m -Xss64m -DONLINE_JUDGE {JAVA_CLASS_NAME}
```

For Python and Java, we double the per-problem time limit.
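
Put together, a per-language command table for driving the Quickstart loop might look like the sketch below. It is illustrative only; `{JAVA_CLASS_NAME}` must be replaced with the public class name of the generated Java code, and sandboxing details are up to the user.

```python
# Illustrative mapping from language to (compile command, run command, time-limit multiplier).
COMMANDS = {
    "cpp": {
        "compile": "g++ -std=c++20 -fno-asm -fsanitize=bounds -fno-sanitize-recover=bounds "
                   "-static -O2 -DONLINE_JUDGE -o code.exe code.cpp",
        "run": "./code.exe",
        "time_multiplier": 1,
    },
    "python": {
        "compile": None,
        "run": "python3 code.py",
        "time_multiplier": 2,  # double time limit for Python
    },
    "java": {
        "compile": "javac -J-Xmx544m {JAVA_CLASS_NAME}.java",
        "run": "java -XX:+UseSerialGC -Xmx544m -Xss64m -DONLINE_JUDGE {JAVA_CLASS_NAME}",
        "time_multiplier": 2,  # double time limit for Java
    },
}
```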

The benchmark kit follows the testlib pipeline for validation and evaluation: each problem has a validator that checks test-case integrity and a dedicated checker that verifies output correctness.

The rating evaluation follows the CodeELO methodology. For pass@8 metrics, we calculate the expected score across 8 tries for each problem, with a fail penalty but no submission-time penalty. While this approach deviates from empirical competitive scenarios and may result in ratings that are not directly comparable to human participants, it provides a standardized benchmark for consistent cross-model comparison.
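
As a rough illustration of an expected-score computation over 8 samples, the sketch below assumes the samples are submitted one by one in a random order and that each failed submission before the first pass costs a fixed penalty. The exact penalty value and ordering model follow CodeELO and may differ from this simplification.

```python
from itertools import permutations

def expected_score(verdicts, full_score, fail_penalty=50):
    """Expected contest score over random submission orders of the 8 samples.

    verdicts: list of 8 booleans (True = sample passes all tests).
    fail_penalty: assumed points deducted per failed submission before the
                  first pass; the actual CodeELO setting may differ.
    """
    total, count = 0, 0
    for order in permutations(verdicts):
        count += 1
        score = 0
        for i, ok in enumerate(order):
            if ok:
                score = max(full_score - fail_penalty * i, 0)
                break
        total += score
    return total / count

# e.g. 3 of 8 samples pass a 1000-point problem
print(expected_score([True] * 3 + [False] * 5, full_score=1000))
```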

## License

We are releasing the benchmark under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.

## Citation

Please cite Step-3.5-Flash.