Dataset Preview
Each problem is stored with the following fields:

| Column | Type |
|---|---|
| id | int64 |
| problem_id | string |
| problem_desc | string |
| time_limit_ms | int64 |
| memory_limit_MB | int64 |
| checker | string |
| test_cases | list |

The first rows of the `test` split (long fields truncated):

| id | problem_id | problem_desc | time_limit_ms | memory_limit_MB | checker | test_cases |
|---|---|---|---|---|---|---|
| 1 | codeforces/2020A | "You are given two integers $ n $ and $ k $ .\n\nIn one operation, you can subtract any power of $ k(...TRUNCATED) | 1000 | 256 | ncmp | [{"input":"6\n5 2\n3 5\n16 4\n100 3\n6492 10\n10 1\n","output":"2\n3\n1\n4\n21\n10\n"},{"input":"100(...TRUNCATED) |
| 2 | codeforces/2020B | "Imagine you have $ n $ light bulbs numbered $ 1, 2, \\ldots, n $ . Initially, all bulbs are on. To (...TRUNCATED) | 1000 | 256 | ncmp | [{"input":"3\n1\n3\n8\n","output":"2\n5\n11\n"},{"input":"10000\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n(...TRUNCATED) |
| 3 | codeforces/2020D | "One fine evening, Alice sat down to play the classic game \"Connect the Dots\", but with a twist.\n(...TRUNCATED) | 2000 | 256 | ncmp | [{"input":"3\n10 2\n1 2 4\n2 2 4\n100 1\n19 2 4\n100 3\n1 2 5\n7 2 6\n17 2 31\n","output":"2\n96\n61(...TRUNCATED) |
| 4 | codeforces/2020E | "You are given an array of $ n $ integers $ a_1,a_2,\\ldots,a_n $ . You are also given an array $ p_(...TRUNCATED) | 4000 | 256 | ncmp | [{"input":"4\n2\n1 2\n5000 5000\n2\n1 1\n1000 2000\n6\n343 624 675 451 902 820\n6536 5326 7648 2165 (...TRUNCATED) |
| 5 | codeforces/2020F | "Let $ n $ and $ d $ be positive integers. We build the the divisor tree $ T_{n,d} $ as follows:\n\n(...TRUNCATED) | 4000 | 256 | ncmp | [{"input":"3\n6 1 1\n1 3 3\n10 1 2\n","output":"14\n1\n53\n"},{"input":"2\n1 1 1\n437300461 96625 34(...TRUNCATED) |
| 6 | codeforces/2024A | "Alice has $ a $ coins. She can open a bank deposit called \"Profitable\", but the minimum amount re(...TRUNCATED) | 1000 | 256 | ncmp | [{"input":"5\n10 5\n7 9\n5 100\n1 1\n1 2\n","output":"10\n5\n0\n1\n0\n"},{"input":"10000\n1 1\n1 2\n(...TRUNCATED) |
| 7 | codeforces/2024B | "There is a vending machine that sells lemonade. The machine has a total of $ n $ slots. You know th(...TRUNCATED) | 1000 | 256 | ncmp | [{"input":"5\n2 1\n1 1\n2 2\n1 2\n3 4\n2 1 3\n10 50\n1 1 3 8 8 9 12 13 27 27\n2 1000000000\n10000000(...TRUNCATED) |
| 8 | codeforces/2024D | "It is already the year $ 3024 $ , ideas for problems have long run out, and the olympiad now takes (...TRUNCATED) | 2000 | 256 | ncmp | [{"input":"4\n2\n15 16\n2 1\n5\n10 10 100 100 1000\n3 4 1 1 1\n3\n100 49 50\n3 2 2\n4\n100 200 300 1(...TRUNCATED) |
| 9 | codeforces/2024F | "Recently, you received a rare ticket to the only casino in the world where you can actually earn so(...TRUNCATED) | 2000 | 256 | rcmp6 | [{"input":"3\n80 80\n70 100\n50 200\n","output":"112.0000000000\n"},{"input":"2\n100 1\n100 1\n","ou(...TRUNCATED) |
| 10 | codeforces/2028A | "Alice is trying to meet up with the Red Queen in the countryside! Right now, Alice is at position $(...TRUNCATED) | 1000 | 256 | nyesno | [{"input":"7\n2 2 2\nNE\n3 2 2\nNNE\n6 2 1\nNNEESW\n6 10 10\nNNEESW\n3 4 2\nNEE\n4 5 5\nNEWS\n10 1 1(...TRUNCATED) |
CF-Div2-Stepfun Evaluation Benchmark
Offline benchmark of 53 Div.2 CodeForces problems.
Introduction
We introduce CF-Div2-Stepfun, a dataset curated to benchmark the competitive programming capabilities of Large Language Models (LLMs). We evaluate our proprietary Step 3.5 Flash alongside several frontier models on this benchmark.
The benchmark comprises 53 problems sourced from official CodeForces Division 2 contests held between September 2024 and February 2025. We develop an offline evaluation framework that utilizes a local grading mechanism as an alternative to real-time online submissions.
The generated test cases consist of:
- Small-scale test cases, for initial functional verification.
- Randomized large-scale data, for performance and complexity verification.
- Handcrafted edge cases, derived from common error patterns and "hacked" submissions from real contest participants.
- Automated stress-testing data, produced by repeatedly generating random test cases until one distinguishes a failed submission from a correct one.
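The stress-testing loop in the last bullet can be sketched as follows. The toy problem, generator, and deliberately buggy solution below are illustrative stand-ins, not the benchmark's actual code:

```python
import random

def gen_case(rng):
    # Hypothetical generator for a toy problem: sum of a small array.
    n = rng.randint(1, 5)
    return [rng.randint(-10, 10) for _ in range(n)]

def reference(a):
    return sum(a)

def buggy(a):
    return sum(a[:-1])  # bug: drops the last element

def stress(max_iters=10_000, seed=0):
    """Keep generating random cases until one separates the failed
    submission from the correct one; return that distinguishing case."""
    rng = random.Random(seed)
    for _ in range(max_iters):
        case = gen_case(rng)
        if reference(case) != buggy(case):
            return case
    return None
```

Once a distinguishing case is found, it can be added to the problem's test set so the failed submission is rejected by the offline evaluator.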
To validate the reliability of this benchmark, we replay both accepted and representative failed submissions from the original contests through our evaluator. It correctly identifies 100% of the accepted submissions as "Passed," and accurately flags 92.45% of the failed submissions.
Quickstart
```python
from datasets import load_dataset

dataset = load_dataset("stepfun-ai/CF-Div2-Stepfun", name="release_v1", split="test")
```
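Each row can then be graded locally against its `test_cases`. The sketch below is a minimal grader under simplifying assumptions: it runs an arbitrary command via `subprocess` and compares output tokens directly, whereas the actual framework enforces memory limits and uses the testlib checker named in the `checker` column:

```python
import subprocess

def run_case(cmd, case, time_limit_ms):
    """Run a contestant program on one test case and capture its stdout.
    `cmd` is the execution command, e.g. ["./code.exe"] or ["python3", "code.py"]."""
    try:
        proc = subprocess.run(
            cmd, input=case["input"], capture_output=True,
            text=True, timeout=time_limit_ms / 1000,
        )
    except subprocess.TimeoutExpired:
        return "TLE", None
    if proc.returncode != 0:
        return "RE", None
    return "OK", proc.stdout

def grade(cmd, row):
    """Verdict for one dataset row: run every test case and compare output
    token-by-token (a stand-in for the real testlib checkers)."""
    for case in row["test_cases"]:
        status, out = run_case(cmd, case, row["time_limit_ms"])
        if status != "OK":
            return status
        if out.split() != case["output"].split():
            return "WA"
    return "AC"
```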
Evaluation Details
The evaluation results are shown below.
| Model | C++ (avg@8) | Python (avg@8) | Java (avg@8) | C++ (pass@8 rating) |
|---|---|---|---|---|
| Step 3.5 Flash | 86.1% | 81.5% | 77.1% | 2489 |
| Gemini 3.0 Pro | 83.5% | 74.1% | 81.6% | 2397 |
| Deepseek V3.2 | 81.6% | 66.5% | 80.7% | 2319 |
| GLM-4.7 | 74.1% | 63.0% | 70.5% | 2156 |
| Claude Opus 4.5 | 72.2% | 68.4% | 68.9% | 2100 |
| Kimi K2-Thinking | 67.9% | 60.4% | 58.5% | 1976 |
| Minimax-M2.1 | 59.0% | 46.4% | 58.0% | 1869 |
| Mimo-V2 Flash | 46.9% | 43.6% | 39.6% | 1658 |
We use the following prompt for all model evaluations:
````
You are a coding expert. Given a competition-level coding problem, you need to write a {LANGUAGE} program to solve it. You may start by outlining your thought process. In the end, please provide the complete code in a code block enclosed with ``` ```.
{question}
````
The compilation and execution commands for C++, Python, and Java are given below:

```shell
# C++
g++ -std=c++20 -fno-asm -fsanitize=bounds -fno-sanitize-recover=bounds --static -O2 -DONLINE_JUDGE -o code.exe code.cpp
./code.exe

# Python
python3 code.py

# Java
javac -J-Xmx544m {JAVA_CLASS_NAME}.java
java -XX:+UseSerialGC -Xmx544m -Xss64m -DONLINE_JUDGE {JAVA_CLASS_NAME}
```
For Python and Java evaluation, we double the time limit.
The benchmark kit follows the testlib pipeline for validation and evaluation: each problem has a validator that checks test-case integrity and a problem-specific checker that verifies output correctness.
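For illustration, an `ncmp`-style check (the testlib checker used by most problems in this benchmark, which compares sequences of integers while ignoring whitespace and formatting) can be approximated as:

```python
def ncmp_check(expected: str, actual: str) -> bool:
    """Compare whitespace-separated integer tokens, in the spirit of
    testlib's ncmp checker. Illustrative sketch, not the real checker."""
    try:
        exp = [int(tok) for tok in expected.split()]
        act = [int(tok) for tok in actual.split()]
    except ValueError:
        return False  # non-integer token in the output
    return exp == act
```

The `rcmp6` and `nyesno` checkers seen in the dataset are analogous: the former compares floating-point sequences to a 1e-6 tolerance, the latter compares case-insensitive yes/no answers.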
The rating evaluation follows the CodeELO methodology. For the pass@8 metric, we calculate the expected score across 8 tries for each problem, applying a fail penalty but no submission-time penalty. While this deviates from empirical competitive scenarios and may yield ratings that are not directly comparable to human participants, it provides a standardized benchmark for consistent cross-model comparison.
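The exact scoring is CodeELO-specific, but as a reference point, the standard unbiased pass@k estimator (Chen et al., 2021) over n sampled solutions with c passing is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n with c correct, passes."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With n = k = 8 this reduces to "did any of the 8 tries pass"; avg@8 in the table above is instead the mean per-try pass rate, i.e. pass@1 averaged over problems.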
License
We are releasing the benchmark under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.
Citation
If you use this benchmark, please cite Step-3.5-Flash.