---
configs:
- config_name: release_v1
  data_files:
  - split: test
    path: release_v1/test_50375e15.parquet
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: problem_id
    dtype: string
  - name: problem_desc
    dtype: string
  - name: time_limit_ms
    dtype: int64
  - name: memory_limit_MB
    dtype: int64
  - name: checker
    dtype: string
  - name: test_cases
    list:
    - name: input
      dtype: string
    - name: output
      dtype: string
license: cc-by-4.0
language:
- en
tags:
- benchmark
- competitive-programming
task_categories:
- text-generation
---
# CF-Div2-Stepfun Evaluation Benchmark
An offline benchmark of 53 Codeforces Division 2 problems.
## Introduction
We introduce **CF-Div2-Stepfun**, a dataset curated to benchmark the competitive programming capabilities of Large Language Models (LLMs). We evaluate our proprietary [**Step 3.5 Flash**](https://huggingface.co/stepfun-ai/Step-3.5-Flash) alongside several frontier models on this benchmark.
The benchmark comprises 53 problems sourced from official [Codeforces](https://codeforces.com/) Division 2 contests held between September 2024 and February 2025. We developed an offline evaluation framework that uses local grading in place of real-time online submission.
The generated test cases consist of:
- Small-scale test cases for initial functional verification.
- Randomized large-scale data for performance and complexity verification.
- Handcrafted edge cases derived from common error patterns and "hacked" submissions from real contest participants.
- Automated stress-testing data, produced by repeatedly generating random tests until one distinguishes failed submissions from correct ones (see the sketch after this list).
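For illustration, a minimal sketch of such a stress-testing loop is given below. The generator, reference, and candidate binary names are hypothetical, and outputs are compared token-wise, roughly as an `ncmp`-style checker would:
```python
import subprocess

def find_distinguishing_case(gen, reference, candidate, max_iters=10_000):
    """Keep generating random tests until one separates a known-failed
    submission (candidate) from a correct one (reference)."""
    for seed in range(max_iters):
        # The generator is assumed to print a random test for the given seed.
        case = subprocess.run([gen, str(seed)], capture_output=True, text=True).stdout
        expected = subprocess.run([reference], input=case, capture_output=True, text=True).stdout
        actual = subprocess.run([candidate], input=case, capture_output=True, text=True).stdout
        if expected.split() != actual.split():  # token-wise comparison
            return case  # distinguishing test found; add it to the test set
    return None  # budget exhausted without separating the two submissions

# case = find_distinguishing_case("./gen", "./ref_solution", "./failed_submission")
```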
To validate the reliability of this benchmark, we replayed both accepted and representative failed submissions from the original contests. Our evaluator marks 100% of the accepted submissions as passed and correctly flags 92.45% of the failed submissions.
## Quickstart
```python
from datasets import load_dataset
from pathlib import Path
import os

dataset = load_dataset("stepfun-ai/CF-Div2-Stepfun", name="release_v1", split="test")

# The evaluation example below uses the first row (problem id = 1).
# Make sure you have built the required checker first. This problem uses the
# "ncmp" checker; for other problems, inspect the row's "checker" field.
# > git clone https://github.com/MikeMirzayanov/testlib.git
# > g++ -std=c++20 -Wall -Wextra -static -I testlib/ testlib/checkers/ncmp.cpp -o ncmp
assert dataset[0]["problem_id"] == "codeforces/2020A"
assert dataset[0]["checker"] == "ncmp"

# Prepare a submission.
submission_code = r"""
#include <bits/stdc++.h>
using namespace std;

int find_min_oper(int n, int k) {
    if (k == 1) return n;
    int ans = 0;
    while (n) {
        ans += n % k;
        n /= k;
    }
    return ans;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        cout << find_min_oper(n, k) << "\n";
    }
    return 0;
}
"""

eval_dir = Path("./eval_test")
eval_dir.mkdir(exist_ok=True)
with open(eval_dir / "code.cpp", "w") as fout:
    fout.write(submission_code)

# Compile the submission.
ret = os.system(
    f"g++ -std=c++20 -fno-asm -fsanitize=bounds -fno-sanitize-recover=bounds "
    f"-static -O2 -DONLINE_JUDGE -o {eval_dir / 'code.exe'} {eval_dir / 'code.cpp'}"
)
assert ret == 0

for idx, test_case in enumerate(dataset[0]["test_cases"]):
    with open(eval_dir / f"{idx}.in", "w") as fout:
        fout.write(test_case["input"])
    with open(eval_dir / f"{idx}.ans", "w") as fout:
        fout.write(test_case["output"])
    # Run the submission (apply time / memory constraints here if you like).
    ret = os.system(f"{eval_dir / 'code.exe'} < {eval_dir}/{idx}.in > {eval_dir}/{idx}.out")
    assert ret == 0
    # Run the checker: <input> <participant output> <reference answer>.
    ret = os.system(f"./ncmp {eval_dir}/{idx}.in {eval_dir}/{idx}.out {eval_dir}/{idx}.ans")
    assert ret == 0
```
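The `os.system` calls above enforce no resource limits. One possible way to apply the dataset's `time_limit_ms` and `memory_limit_MB` fields is sketched below using `subprocess` and `resource` (POSIX-only; the helper name and verdict strings are our own, not part of the dataset):
```python
import resource
import subprocess

def run_with_limits(cmd, in_path, out_path, time_limit_ms, memory_limit_mb):
    """Run a submission binary under the dataset's time and memory limits."""
    def limit_memory():
        limit = memory_limit_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    with open(in_path) as fin, open(out_path, "w") as fout:
        try:
            proc = subprocess.run(
                cmd, stdin=fin, stdout=fout,
                timeout=time_limit_ms / 1000.0,
                preexec_fn=limit_memory,
            )
        except subprocess.TimeoutExpired:
            return "TIME_LIMIT_EXCEEDED"
    return "OK" if proc.returncode == 0 else "RUNTIME_ERROR"

# verdict = run_with_limits(
#     [str(eval_dir / "code.exe")], eval_dir / "0.in", eval_dir / "0.out",
#     dataset[0]["time_limit_ms"], dataset[0]["memory_limit_MB"],
# )
```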
## Evaluation Details
The evaluation results are shown below.
| Model | C++ (avg@8) | Python (avg@8) | Java (avg@8) | C++ (pass@8 rating) |
| --- | --- | --- | --- | --- |
| Step 3.5 Flash | **86.1%** | **81.5%** | 77.1% | **2489** |
| Gemini 3.0 Pro | 83.5% | 74.1% | **81.6%** | 2397 |
| Deepseek V3.2 | 81.6% | 66.5% | 80.7% | 2319 |
| GLM-4.7 | 74.1% | 63.0% | 70.5% | 2156 |
| Claude Opus 4.5 | 72.2% | 68.4% | 68.9% | 2100 |
| Kimi K2-Thinking | 67.9% | 60.4% | 58.5% | 1976 |
| Minimax-M2.1 | 59.0% | 46.4% | 58.0% | 1869 |
| Mimo-V2 Flash | 46.9% | 43.6% | 39.6% | 1658 |
We use the following prompt for all model evaluations:
````
You are a coding expert. Given a competition-level coding problem, you need to write a {LANGUAGE} program to solve it. You may start by outlining your thought process. In the end, please provide the complete code in a code block enclosed with ``` ```.
{question}
````
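For instance, the prompt for a given row can be assembled from the `problem_desc` field like so (a minimal sketch; `build_prompt` is our own helper, not part of the dataset):
```python
# Each dataset row carries the full problem statement in "problem_desc".
PROMPT_TEMPLATE = """You are a coding expert. Given a competition-level coding problem, you need to write a {LANGUAGE} program to solve it. You may start by outlining your thought process. In the end, please provide the complete code in a code block enclosed with ``` ```.
{question}"""

def build_prompt(row, language="C++"):
    return PROMPT_TEMPLATE.format(LANGUAGE=language, question=row["problem_desc"])

# prompt = build_prompt(dataset[0], language="Python")
```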
The compilation and execution commands for C++, Python, and Java are given below:
```
g++ -std=c++20 -fno-asm -fsanitize=bounds -fno-sanitize-recover=bounds -static -O2 -DONLINE_JUDGE -o code.exe code.cpp
./code.exe
```
```
python3 code.py
```
```
javac -J-Xmx544m {JAVA_CLASS_NAME}.java
java -XX:+UseSerialGC -Xmx544m -Xss64m -DONLINE_JUDGE {JAVA_CLASS_NAME}
```
For Python and Java evaluation, we double the per-problem time limit.
The benchmark kit follows the [testlib](https://github.com/MikeMirzayanov/testlib) pipeline for validation and evaluation: each problem has a validator that checks test-case integrity and a dedicated checker that verifies output correctness.
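When driving a testlib checker from a harness, the verdict is conveyed through the exit code (0 = OK, 1 = wrong answer, 2 = presentation error, 3 = checker failure, per the usual testlib convention). A small sketch of mapping these to verdicts:
```python
import subprocess

# Standard testlib checker exit codes (see testlib.h).
VERDICTS = {0: "OK", 1: "WRONG_ANSWER", 2: "PRESENTATION_ERROR", 3: "FAIL"}

def run_checker(checker, in_file, out_file, ans_file):
    proc = subprocess.run(
        [checker, str(in_file), str(out_file), str(ans_file)],
        capture_output=True, text=True,
    )
    # testlib checkers print a human-readable comment to stderr.
    return VERDICTS.get(proc.returncode, "UNKNOWN"), proc.stderr.strip()

# verdict, comment = run_checker("./ncmp", "0.in", "0.out", "0.ans")
```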
The rating evaluation follows the [CodeElo](https://github.com/QwenLM/CodeElo) methodology. For the pass@8 rating, we calculate the expected score across 8 tries for each problem, applying a fail penalty but no submission-time penalty. While this deviates from real contest conditions and the resulting ratings are not directly comparable to human participants, it provides a standardized basis for consistent cross-model comparison.
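The full CodeElo rating computation simulates contest standings, but the per-problem metrics in the table above can be illustrated with a short sketch: avg@k plus the standard unbiased pass@k estimator. This is our reading of the metrics, not necessarily the exact implementation used:
```python
from math import comb

def avg_at_k(num_tries, num_passed):
    """avg@k: fraction of the k tries that solved the problem."""
    return num_passed / num_tries

def pass_at_k(num_tries, num_passed, k):
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    with n total tries, c of them passing."""
    if num_tries - num_passed < k:
        return 1.0
    return 1.0 - comb(num_tries - num_passed, k) / comb(num_tries, k)

# With 8 tries and 5 passing: avg@8 = 0.625, pass@8 = 1.0
print(avg_at_k(8, 5), pass_at_k(8, 5, 8))
```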
## License
We are releasing the benchmark under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.
## Citation
Please cite [Step-3.5-Flash](https://github.com/stepfun-ai/Step-3.5-Flash).