---
dataset_info:
  features:
  - name: data_source
    dtype: string
  - name: prompt
    list:
    - name: role
      dtype: string
    - name: content
      dtype: string
  - name: ability
    dtype: string
  - name: reward_model
    struct:
    - name: style
      dtype: string
    - name: ground_truth
      dtype: string
  - name: extra_info
    struct:
    - name: index
      dtype: int64
  splits:
  - name: train
    num_bytes: 3218296953
    num_examples: 25276
  download_size: 1652135331
  dataset_size: 3218296953
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- reinforcement-learning
- text-generation
tags:
- code
- reasoning
- rlhf
- verl
---

# Eurus-2-Code-RL (VERL Format)

This dataset contains **25,276** competitive programming problems from the Eurus-2-RL-Data dataset, filtered and converted to VERL format for reinforcement learning training workflows.

**Source**: [PRIME-RL/Eurus-2-RL-Data](https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data)

**License**: MIT

> **Note (Updated 2025-10-27)**: System prompts have been removed from all examples for better compatibility with other code datasets. The dataset now contains only user messages with the coding problems. See changelog for details.

## Dataset Description

Eurus-2-Code-RL is a curated collection of competitive programming problems specifically designed for training language models using reinforcement learning. The problems are sourced from various high-quality coding challenge platforms and include:
- CodeContests problems
- TACO (Topics in Algorithmic COde generation) problems
- APPS (Automated Programming Progress Standard) problems
- Codeforces problems

## Dataset Structure

The dataset follows the VERL format with the following fields:

- `data_source` (string): Original source identifier (e.g., "taco", "codecontests", "apps", "codeforces")
- `prompt` (list): Chat template format with role/content structure
  - User message with the coding problem
- `ability` (string): Task category ("code")
- `reward_model` (dict): Evaluation information
  - `style`: Evaluation method ("rule" for test-based evaluation)
  - `ground_truth`: Test cases for evaluation
- `extra_info` (dict): Additional metadata
  - `split`: Data split ("train" or "dummy")
  - `index`: Example index

## Data Quality

**High-Quality Problems**:
- **Diverse sources** - Problems from competitive programming platforms
- **RL-focused** - Specifically designed for reinforcement learning training
- **Verified solutions** - Ground truth test cases for reward model evaluation
- **Compatible format** - Matches structure of other VERL code datasets

### Sample Problem

```python
{
  "data_source": "taco",
  "prompt": [
    {
      "role": "user",
      "content": "One tradition of ACM-ICPC contests is that a team gets a balloon for every solved problem. We assume that the submission time doesn't matter and teams are sorted only by the number of balloons they have. It means that one's place is equal to the number of teams with more balloons, increased by 1...\n\nWrite Python code to solve the problem. Present the code in \n```python\nYour code\n```\nat the end."
    }
  ],
  "ability": "code",
  "reward_model": {
    "style": "rule",
    "ground_truth": "{\"inputs\": [...], \"outputs\": [...]}"
  },
  "extra_info": {
    "split": "train",
    "index": 0
  }
}
```
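The `reward_model.ground_truth` field stores test cases as a JSON-encoded string. A minimal sketch of decoding it for rule-based scoring (the string below is illustrative, not a real entry, and the actual reward function used in training is not shown here):

```python
import json

# Illustrative ground_truth string in the same shape as the sample above;
# real entries contain many more test cases.
ground_truth = '{"inputs": ["5 3 1 2 2\\n"], "outputs": ["2\\n"]}'

tests = json.loads(ground_truth)

def rule_reward(predicted_outputs, tests):
    """Fraction of test cases whose output matches the expected output
    exactly, after stripping surrounding whitespace."""
    expected = tests["outputs"]
    hits = sum(p.strip() == e.strip() for p, e in zip(predicted_outputs, expected))
    return hits / len(expected)

reward = rule_reward(["2"], tests)
```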

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("sungyub/eurus-2-code-verl")

# Access an example
example = dataset['train'][0]
print(example['prompt'][0]['content'])  # Coding problem
print(example['reward_model']['ground_truth'])  # Test cases
print(example['data_source'])  # Source dataset

# Stream the dataset for memory efficiency
dataset = load_dataset("sungyub/eurus-2-code-verl", streaming=True)
for example in dataset['train']:
    # Process examples one at a time
    pass
```

## Statistics

- **Total examples**: 25,276
- **Format**: Single Parquet file (stored with Git LFS)
- **File size**: ~1.54 GB
- **File**: train-00000-of-00001.parquet
- **Filter rate**: 5.3% of total Eurus-2 dataset

## Source Datasets

The problems are sourced from multiple high-quality competitive programming datasets:
- **codecontests**: CodeContests problems (9,639 problems)
- **taco**: Topics in Algorithmic COde generation (9,579 problems)
- **apps**: Automated Programming Progress Standard (3,462 problems)
- **codeforces**: Codeforces problems (2,596 problems)
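The per-source counts above can be re-derived by tallying the `data_source` field. A sketch over stand-in records so it runs anywhere; the same loop works when iterating `load_dataset("sungyub/eurus-2-code-verl", streaming=True)["train"]`:

```python
from collections import Counter

# Stand-in records with the same field as the real dataset;
# replace with the streaming split to tally the full 25,276 examples.
examples = [
    {"data_source": "taco"},
    {"data_source": "codecontests"},
    {"data_source": "taco"},
    {"data_source": "apps"},
]

counts = Counter(ex["data_source"] for ex in examples)
```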

## Problem Types

The dataset covers a wide range of programming challenges including:
- Algorithm design and implementation
- Data structures
- Dynamic programming
- Graph algorithms
- String processing
- Mathematical problems
- And more...

## File Structure

The dataset is contained in a single parquet file:
- File name: `train-00000-of-00001.parquet`
- Contains all 25,276 examples
- HuggingFace datasets library automatically handles file loading

## Conversion

The dataset was converted using a streaming approach:

```bash
# Install dependencies
pip install datasets pyarrow

# Run the conversion script
python convert_to_verl.py
```

The script's features:
- Streaming processing for memory efficiency
- `ParquetWriter` for incremental output
- Progress tracking and resume capability
- Keeps only code problems (`ability == "code"`)
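The actual `convert_to_verl.py` is not included here, but its core filter-and-strip step can be sketched as follows (field names taken from the schema above; the `ParquetWriter` plumbing is omitted):

```python
def keep_code(example):
    # Filter step: only examples with ability == "code" survive (5.3% of the source)
    return example["ability"] == "code"

def strip_system(example):
    # Keep only user messages (system prompts removed in the 2025-10-27 update)
    example["prompt"] = [m for m in example["prompt"] if m["role"] == "user"]
    return example

# Demo on a synthetic source record
record = {
    "ability": "code",
    "prompt": [
        {"role": "system", "content": "structured reasoning prompt"},
        {"role": "user", "content": "Solve the problem..."},
    ],
}
converted = strip_system(record) if keep_code(record) else None
```

With the `datasets` library, the same logic maps to `dataset.filter(keep_code).map(strip_system)` before writing the Parquet shard.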

## Use Cases

This dataset is ideal for:
- **Reinforcement Learning**: Training code generation models with RL
- **Fine-tuning**: Improving competitive programming capabilities
- **Code Generation**: Training models to solve algorithmic problems
- **Dataset Merging**: Compatible with other VERL code datasets (e.g., skywork-or1-code-verl)

## Technical Details

### Conversion Process
1. Loaded source dataset from HuggingFace in streaming mode
2. Filtered examples where ability='code'
3. Removed system prompts for compatibility (2025-10-27)
4. Output to single parquet file
5. Total conversion time: ~2.4 minutes
6. Filter rate: 5.3% (25,276 code problems from 480,537 total)

### VERL Format Benefits
- **Standardized structure**: Consistent across all VERL datasets
- **Rich metadata**: Includes source and split information
- **Chat template**: Ready for instruction-tuned models
- **Reward model integration**: Test cases for RL training
- **Dataset compatibility**: Works seamlessly with other VERL code datasets

## Original System Prompt (Removed)

The original dataset included a structured reasoning system prompt with the following actions:
- **[ASSESS]**: Evaluate the current state
- **[ADVANCE]**: Take a concrete step forward
- **[VERIFY]**: Check validity of steps
- **[SIMPLIFY]**: Break down complex parts
- **[SYNTHESIZE]**: Combine insights
- **[PIVOT]**: Change approach if needed
- **[OUTPUT]**: Present the final answer

This prompt has been removed from all examples to ensure compatibility with other code datasets that use simple user-only prompts. Users who wish to use structured reasoning can add their own system prompts at training time.
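Re-adding a system prompt at training time is a one-line `map`; a minimal sketch (the prompt text below is a hypothetical placeholder, not part of this dataset):

```python
# Hypothetical placeholder; supply your own structured-reasoning prompt text
SYSTEM_PROMPT = "You are a careful competitive programmer."

def add_system(example):
    # Prepend a system message to the user-only prompt list
    example["prompt"] = [{"role": "system", "content": SYSTEM_PROMPT}] + example["prompt"]
    return example

# Demo on a single record in this dataset's prompt format
example = {"prompt": [{"role": "user", "content": "Write Python code to solve..."}]}
example = add_system(example)
```

With a loaded dataset, apply it as `dataset = dataset.map(add_system)`.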

## Additional Information

For more information about VERL format, see the [VERL documentation](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html).

## Citation

If you use this dataset, please cite the original Eurus-2-RL-Data:

```bibtex
@misc{eurus-2-rl-data,
  title={Eurus-2-RL-Data},
  author={PRIME-RL},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data}
}
```

## Changelog

### 2025-10-27 - System Prompt Removal
- **Removed system prompts** from all 25,276 examples
- **Improved compatibility** with other VERL code datasets (e.g., skywork-or1-code-verl)
- Prompt structure now: `[{"role": "user", "content": "..."}]` instead of `[{"role": "system", ...}, {"role": "user", ...}]`
- All other fields (data_source, ability, reward_model, extra_info) preserved
- Original system prompt content documented above for reference
- File size remains ~1.54GB

### 2025-10-14 - Initial Release
- Filtered and converted 25,276 code problems from Eurus-2-RL-Data
- Single file for efficient loading
- Preserved original source information and metadata
- Included structured reasoning system prompt
- Total size: 1.54GB