---
license: mit
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- competitive-programming
- algorithms
- icpc
- code-generation
- programming-challenges
- llm-evaluation
- benchmark
size_categories:
- n<1K
---
# Idea First, Code Later: CP Benchmark
> Benchmark dataset for the paper: **"Idea First, Code Later: Disentangling Problem Solving from Code Generation in Evaluating LLMs for Competitive Programming"**
A curated benchmark of 83 competitive programming problems designed for evaluating LLMs on algorithmic problem-solving separately from code generation.
## Motivation
We curate problems from seven contests that are **not hosted on major public CP platforms** (e.g., Codeforces, AtCoder). While some problem statements may be publicly accessible (e.g., contest PDFs or course materials), we expect contamination risk to be lower than for widely scraped judge platforms.
The problems come from **regional ICPC contests** and **CS3233** (Competitive Programming course) examinations at the National University of Singapore, spanning **2017–2025**.
## What's Included
Each problem package includes:
- **Original problem statement** in Markdown format
- **Gold editorial** written by the problem setter or tester (solution analysis)
- **Full official test suite** (sample + hidden test cases)
We rely on complete judge test suites to avoid false positives in evaluation.
## Difficulty Grouping
To account for variation in problem difficulty, we group problems using **solve rates** (the proportion of teams that solved the problem) from official scoreboards. Within each contest, problems are ranked by solve rate and partitioned into three **contest-relative tertiles**:
- **T1** (easiest third)
- **T2** (middle third)
- **T3** (hardest third)
Pooling across contests yields approximately balanced difficulty groups while preserving each contest's intrinsic difficulty profile.
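The contest-relative tertile assignment can be sketched as follows (an illustrative sketch, not the paper's exact script; the `solve_percentage` values below are hypothetical):

```python
# Assign contest-relative difficulty tertiles by solve rate.
# Illustrative sketch; field names mirror the dataset schema, values are made up.

def assign_tertiles(problems):
    """problems: list of dicts with a 'solve_percentage' key (one contest)."""
    # Rank from easiest (highest solve rate) to hardest (lowest).
    ranked = sorted(problems, key=lambda p: -p["solve_percentage"])
    n = len(ranked)
    for i, p in enumerate(ranked):
        # Partition ranks into three roughly equal groups: T1, T2, T3.
        p["Difficulty_Tertile"] = 1 + (3 * i) // n
    return ranked

contest = [{"solve_percentage": s} for s in (92.0, 75.0, 60.0, 40.0, 25.0, 8.0)]
tertiles = [p["Difficulty_Tertile"] for p in assign_tertiles(contest)]
# tertiles == [1, 1, 2, 2, 3, 3]
```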
## Dataset Composition
| Contest | Year | Source | #Teams | #Problems |
|---------|------|--------|--------|-----------|
| CS3233 Midterm Contest | 2023 | NUS | 25 | 11 |
| CS3233 Midterm Contest | 2024 | NUS | 15 | 12 |
| CS3233 Midterm Contest | 2025 | NUS | 16 | 11 |
| ICPC Asia Pacific Championship | 2024 | GitHub | 65 | 13 |
| ICPC Asia Jakarta Regional | 2017 | GitHub | 80 | 12 |
| ICPC Asia Jakarta Regional | 2018 | GitHub | 75 | 12 |
| ICPC Asia Jakarta Regional | 2019 | GitHub | 80 | 12 |
| **Total** | -- | -- | -- | **83** |
## Dataset Description
This dataset contains competitive programming problems with:
- **Problem statements** in Markdown format
- **Test cases** (input/output pairs)
- **Solution analysis** (where available)
- **Contest metadata** (difficulty, solve rates, etc.)
## Sources
The problems are sourced from:
- **ICPC Asia Pacific Championship 2024**
- **ICPC Jakarta Regionals** (2017, 2018, 2019)
- **CS3233 NUS Midterm Contests** (2023, 2024, 2025)
## Copyright and Permissions
The **CS3233 portion** of the dataset consists of course assessment materials from the National University of Singapore. We obtained copyright permission from the course instructor to include and redistribute these materials (problem statements, gold editorials) as part of our dataset release.
The CS3233 gold editorials are **private course materials** that were not publicly released prior to this work.
## Dataset Structure
Each example contains:
### Identifiers
| Field | Type | Description |
|-------|------|-------------|
| `problem_id` | string | Unique identifier (contest_slug) |
| `problem_code` | string | Problem letter (A, B, C, ...) |
| `problem_slug` | string | URL-friendly problem name |
| `problem_title` | string | Full problem title |
### Contest Information
| Field | Type | Description |
|-------|------|-------------|
| `contest_name` | string | Contest identifier |
| `contest_full_name` | string | Full contest name |
| `year` | string | Competition year |
| `source` | string | Source URL/repository |
| `total_teams` | int | Total teams in contest |
| `total_problems` | int | Total problems in contest |
### Problem Details
| Field | Type | Description |
|-------|------|-------------|
| `statement` | string | Problem statement in Markdown |
| `analysis` | string | Editorial/solution analysis |
| `time_limit` | string | Time limit for solutions |
| `memory_limit` | string | Memory limit |
| `author` | string | Problem author |
| `analysis_author` | string | Editorial author |
### Test Cases
| Field | Type | Description |
|-------|------|-------------|
| `sample_test_cases_input` | list[string] | Sample inputs (public) |
| `sample_test_cases_output` | list[string] | Sample outputs (public) |
| `hidden_test_cases_input` | list[string] | Hidden inputs (judging) |
| `hidden_test_cases_output` | list[string] | Hidden outputs (judging) |
| `has_special_judge` | bool | True if problem accepts multiple correct outputs |
| `special_judge_code` | string | C++ scorer code (testlib) for validating outputs |
| `special_judge_format` | string | "standard" or "jakarta2017" |
| `uses_kattis` | bool | True for CS3233 (submit via Kattis) |
| `kattis_problem_id` | string | Kattis problem ID for submission |
| `contest_standings_csv` | string | Full contest scoreboard in CSV format |
| `scoreboard_url` | string | Original URL of the contest scoreboard |
#### Special Judge (Scorer)
Some problems accept **multiple correct outputs** for a given input (e.g., "output any valid permutation"). These problems have `has_special_judge=True` and include a C++ scorer in `special_judge_code` that validates whether a contestant's output is correct.
**standard** format (most contests):
```bash
./scorer <input_file> <judge_output_file> <contestant_output_file>
# Prints "AC" or "WA" to stdout
```
**jakarta2017** format (ICPC Jakarta 2017 only):
```bash
./scorer <input_file> <unused> <judge_output_file> < contestant_output
# Reads contestant output from stdin
# Prints "WA" if wrong, nothing if correct
```
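For reference, a standard-format scorer can be invoked from Python roughly as follows (a minimal sketch; the scorer and file paths are hypothetical placeholders, and the provided test runner handles this internally):

```python
import subprocess

def run_standard_scorer(scorer_path, input_file, judge_output, contestant_output):
    """Invoke a compiled standard-format scorer; return True on 'AC'.

    Sketch only: all paths are hypothetical placeholders.
    """
    result = subprocess.run(
        [scorer_path, input_file, judge_output, contestant_output],
        capture_output=True, text=True,
    )
    # The standard scorer prints "AC" or "WA" to stdout.
    return result.stdout.strip() == "AC"
```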
### CS3233 Contests (Kattis Submission)
For CS3233 problems, hidden test cases are not included in the dataset (they are hosted on Kattis). Sample test cases are extracted from the problem statements.
To submit solutions, set up [Kattis CLI](https://github.com/Kattis/kattis-cli) following the instructions in their repository, then run:
```bash
kattis <solution_file> -p <problem_id> -f
```
The `kattis_problem_id` field contains the Kattis problem ID for submission.
## Notes on Empty Fields
Some fields may be empty depending on the contest source:
| Field | When Empty | Reason |
|-------|------------|--------|
| `hidden_test_cases_*` | CS3233 contests | Test cases hosted on Kattis, not public |
| `author`, `analysis_author` | CS3233 contests | Author info not available |
| `special_judge_*` | Most problems | Only ~40% of problems need special judges |
| `kattis_problem_id` | ICPC contests | Only CS3233 uses Kattis |
| `memory_limit` | Some ICPC contests | Not always specified |
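When iterating over the dataset, it helps to branch on these fields to pick the right evaluation path; a minimal sketch using the schema fields documented above:

```python
def evaluation_mode(problem):
    """Decide how a problem should be judged, based on which fields are populated.

    Sketch using the dataset's schema fields; not part of the released code.
    """
    if problem.get("uses_kattis"):
        return "kattis"            # CS3233: submit via Kattis CLI
    if problem.get("has_special_judge"):
        return "special_judge"     # compile and run special_judge_code
    return "diff"                  # exact-match against hidden outputs

assert evaluation_mode({"uses_kattis": True}) == "kattis"
assert evaluation_mode({"has_special_judge": True}) == "special_judge"
assert evaluation_mode({}) == "diff"
```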
### Contest Statistics
| Field | Type | Description |
|-------|------|-------------|
| `teams_solved` | int | Number of teams that solved |
| `teams_wrong_answer` | int | Teams with wrong answers |
| `teams_unattempted` | int | Teams that didn't try |
| `teams_tried` | int | Teams that attempted |
| `solve_percentage` | float | % of teams that solved |
| `first_solve_time` | int | Time to first solve (minutes) |
| `average_solve_time` | float | Average solve time (minutes) |
| `total_attempts` | int | Total submission attempts |
| `average_attempts` | float | Average attempts per team |
| `Difficulty_Tertile` | int | Contest-relative difficulty tertile (1 = easiest, 3 = hardest) |
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("samahadhoud/idea-first-code-later-cp")
# Access a problem
problem = dataset["train"][0]
print(problem["problem_title"])
print(problem["statement"])
# Get test cases
for inp, out in zip(problem["sample_test_cases_input"], problem["sample_test_cases_output"]):
print(f"Sample Input: {inp[:100]}...")
print(f"Sample Output: {out[:100]}...")
# Hidden test cases for evaluation
print(f"Hidden test cases: {len(problem['hidden_test_cases_input'])}")
```
## Test Runner
Use the provided test runner to evaluate solutions:
```python
from hf_test_runner import TestRunner
runner = TestRunner()
# Run a C++ solution
results = runner.run_solution(
problem_id="icpc-jakarta-2017_icpc",
solution_code="""
#include <iostream>
using namespace std;
int main() {
int n;
cin >> n;
cout << n * 2 << endl;
return 0;
}
""",
language="cpp"
)
print(results["status"]) # "PASSED" or error type
```
The test runner automatically handles:
- **Sample and hidden test cases**
- **Special judges** (problems with multiple valid outputs)
- **Kattis submission** (for CS3233 problems via [kattis-cli](https://github.com/Kattis/kattis-cli))
- **Memory and time limits**
## Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@misc{hadhoud2026ideafirstcodelater,
title={Idea First, Code Later: Disentangling Problem Solving from Code Generation in Evaluating LLMs for Competitive Programming},
author={Sama Hadhoud and Alaa Elsetohy and Frederikus Hudi and Jan Christian Blaise Cruz and Steven Halim and Alham Fikri Aji},
year={2026},
eprint={2601.11332},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2601.11332}
}
```
## License
MIT License