---
dataset_info:
  features:
  - name: data_source
    dtype: string
  - name: prompt
    list:
    - name: role
      dtype: string
    - name: content
      dtype: string
  - name: ability
    dtype: string
  - name: reward_model
    struct:
    - name: style
      dtype: string
    - name: ground_truth
      dtype: string
  - name: extra_info
    struct:
    - name: index
      dtype: int64
  splits:
  - name: train
    num_bytes: 3218296953
    num_examples: 25276
  download_size: 1652135331
  dataset_size: 3218296953
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- reinforcement-learning
- text-generation
tags:
- code
- reasoning
- rlhf
- verl
---
# Eurus-2-Code-RL (VERL Format)
This dataset contains 25,276 competitive programming problems from the Eurus-2-RL-Data dataset, filtered and converted to VERL format for reinforcement learning training workflows.
**Source:** [PRIME-RL/Eurus-2-RL-Data](https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data)

**License:** MIT

> **Note (Updated 2025-10-27):** System prompts have been removed from all examples for better compatibility with other code datasets. The dataset now contains only user messages with the coding problems. See the changelog for details.
## Dataset Description
Eurus-2-Code-RL is a curated collection of competitive programming problems specifically designed for training language models using reinforcement learning. The problems are sourced from various high-quality coding challenge platforms and include:
- CodeContests problems
- TACO (Topics in Algorithmic COde generation) problems
- APPS (Automated Programming Progress Standard) problems
- Codeforces problems
## Dataset Structure
The dataset follows the VERL format with the following fields:

- `data_source` (string): Original source identifier (e.g., "taco", "codecontests", "apps", "codeforces")
- `prompt` (list): Chat-template messages with role/content structure; each example holds a single user message containing the coding problem
- `ability` (string): Task category ("code")
- `reward_model` (dict): Evaluation information
  - `style`: Evaluation method ("rule" for test-based evaluation)
  - `ground_truth`: Test cases for evaluation
- `extra_info` (dict): Additional metadata
  - `split`: Data split ("train" or "dummy")
  - `index`: Example index
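The field layout above can be sanity-checked with a small structural test. This is an illustrative sketch, not part of the dataset tooling; `check_verl_record` is a hypothetical helper and the sample record is constructed from the schema description:

```python
# Minimal structural check for the VERL record layout described above.
def check_verl_record(example: dict) -> bool:
    assert isinstance(example["data_source"], str)
    assert all({"role", "content"} <= msg.keys() for msg in example["prompt"])
    assert example["ability"] == "code"
    assert {"style", "ground_truth"} <= example["reward_model"].keys()
    assert "index" in example["extra_info"]
    return True

# Sample record shaped like the schema (values are placeholders).
record = {
    "data_source": "taco",
    "prompt": [{"role": "user", "content": "Solve the problem..."}],
    "ability": "code",
    "reward_model": {"style": "rule", "ground_truth": "{\"inputs\": [], \"outputs\": []}"},
    "extra_info": {"split": "train", "index": 0},
}
print(check_verl_record(record))  # prints True
```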
## Data Quality
**High-Quality Problems:**
- ✅ Diverse sources - Problems from competitive programming platforms
- ✅ RL-focused - Specifically designed for reinforcement learning training
- ✅ Verified solutions - Ground truth test cases for reward model evaluation
- ✅ Compatible format - Matches structure of other VERL code datasets
## Sample Problem

```json
{
  "data_source": "taco",
  "prompt": [
    {
      "role": "user",
      "content": "One tradition of ACM-ICPC contests is that a team gets a balloon for every solved problem. We assume that the submission time doesn't matter and teams are sorted only by the number of balloons they have. It means that one's place is equal to the number of teams with more balloons, increased by 1...\n\nWrite Python code to solve the problem. Present the code in \n```python\nYour code\n```\nat the end."
    }
  ],
  "ability": "code",
  "reward_model": {
    "style": "rule",
    "ground_truth": "{\"inputs\": [...], \"outputs\": [...]}"
  },
  "extra_info": {
    "split": "train",
    "index": 0
  }
}
```
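As the sample shows, `ground_truth` is a JSON-encoded string holding parallel `inputs` and `outputs` lists. A minimal sketch of decoding it for rule-based evaluation (the concrete test strings below are made up for illustration):

```python
import json

# ground_truth is stored as a JSON string with "inputs" and "outputs" lists
# (format inferred from the sample record above; values are illustrative).
ground_truth = '{"inputs": ["3\\n1 2 3\\n", "1\\n5\\n"], "outputs": ["1 1 1\\n", "1\\n"]}'

tests = json.loads(ground_truth)
for stdin, expected in zip(tests["inputs"], tests["outputs"]):
    # A rule-based reward would run the model's program on `stdin`
    # and compare its stdout against `expected`.
    print(repr(stdin), "->", repr(expected))
```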
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("sungyub/eurus-2-code-verl")

# Access an example
example = dataset['train'][0]
print(example['prompt'][0]['content'])          # Coding problem
print(example['reward_model']['ground_truth'])  # Test cases
print(example['data_source'])                   # Source dataset

# Stream the dataset for memory efficiency
dataset = load_dataset("sungyub/eurus-2-code-verl", streaming=True)
for example in dataset['train']:
    # Process examples one at a time
    pass
```
## Statistics
- Total examples: 25,276
- Format: 1 Parquet file with Git LFS
- File size: ~1.54 GB
- File: train-00000-of-00001.parquet
- Filter rate: 5.3% of total Eurus-2 dataset
## Source Datasets
The problems are sourced from multiple high-quality competitive programming datasets:
- codecontests: CodeContests problems (9,639 problems)
- taco: Topics in Algorithmic COde generation (9,579 problems)
- apps: Automated Programming Progress Standard (3,462 problems)
- codeforces: Codeforces problems (2,596 problems)
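These counts can be re-derived by tallying the `data_source` field. A small sketch using an in-memory stand-in for the streamed dataset (`source_counts` is a hypothetical helper; in practice you would iterate the streamed split):

```python
from collections import Counter

# Tally examples per data_source; `mock` stands in for the streamed dataset.
def source_counts(records):
    return Counter(r["data_source"] for r in records)

mock = [{"data_source": s} for s in ["taco", "taco", "apps", "codeforces"]]
print(source_counts(mock))  # Counter({'taco': 2, 'apps': 1, 'codeforces': 1})
```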
## Problem Types
The dataset covers a wide range of programming challenges including:
- Algorithm design and implementation
- Data structures
- Dynamic programming
- Graph algorithms
- String processing
- Mathematical problems
- And more...
## File Structure
The dataset is contained in a single Parquet file:
- File name: `train-00000-of-00001.parquet` (contains all 25,276 examples)
- The HuggingFace `datasets` library handles file loading automatically
## Conversion
The dataset was converted using a streaming approach:

```bash
# Install dependencies
pip install datasets pyarrow

# Run conversion
python convert_to_verl.py
```

Key features of the conversion:
- Streaming processing for memory efficiency
- `ParquetWriter` for efficient output
- Progress tracking and resume capability
- Keeps only code problems (`ability='code'`)
## Use Cases
This dataset is ideal for:
- Reinforcement Learning: Training code generation models with RL
- Fine-tuning: Improving competitive programming capabilities
- Code Generation: Training models to solve algorithmic problems
- Dataset Merging: Compatible with other VERL code datasets (e.g., skywork-or1-code-verl)
## Technical Details

### Conversion Process
1. Loaded the source dataset from HuggingFace in streaming mode
2. Filtered examples where `ability='code'`
3. Removed system prompts for compatibility (2025-10-27)
4. Wrote the output to a single Parquet file

- Total conversion time: ~2.4 minutes
- Filter rate: 5.3% (25,276 code problems from 480,537 total)
### VERL Format Benefits
- Standardized structure: Consistent across all VERL datasets
- Rich metadata: Includes source and split information
- Chat template: Ready for instruction-tuned models
- Reward model integration: Test cases for RL training
- Dataset compatibility: Works seamlessly with other VERL code datasets
### Original System Prompt (Removed)
The original dataset included a structured reasoning system prompt with the following actions:
- [ASSESS]: Evaluate the current state
- [ADVANCE]: Take a concrete step forward
- [VERIFY]: Check validity of steps
- [SIMPLIFY]: Break down complex parts
- [SYNTHESIZE]: Combine insights
- [PIVOT]: Change approach if needed
- [OUTPUT]: Present the final answer
This prompt has been removed from all examples to ensure compatibility with other code datasets that use simple user-only prompts. Users who wish to use structured reasoning can add their own system prompts at training time.
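Adding a system prompt back at training time is a one-line transformation of the `prompt` list. The prompt text and the `add_system_prompt` helper below are hypothetical examples:

```python
# Hypothetical system prompt; substitute whatever your training setup needs.
SYSTEM = "You are a careful competitive-programming assistant."

def add_system_prompt(example):
    # Prepend a system message to the existing user-only prompt list.
    example["prompt"] = [{"role": "system", "content": SYSTEM}] + example["prompt"]
    return example

example = {"prompt": [{"role": "user", "content": "Solve..."}]}
example = add_system_prompt(example)
print([m["role"] for m in example["prompt"]])  # prints ['system', 'user']
```

With the `datasets` library this would typically be applied as `dataset.map(add_system_prompt)`.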
## Additional Information
For more information about VERL format, see the VERL documentation.
## Citation
If you use this dataset, please cite the original Eurus-2-RL-Data:

```bibtex
@misc{eurus-2-rl-data,
  title={Eurus-2-RL-Data},
  author={PRIME-RL},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data}
}
```
## Changelog

### 2025-10-27 - System Prompt Removal
- Removed system prompts from all 25,276 examples
- Improved compatibility with other VERL code datasets (e.g., skywork-or1-code-verl)
- Prompt structure is now `[{"role": "user", "content": "..."}]` instead of `[{"role": "system", ...}, {"role": "user", ...}]`
- All other fields (`data_source`, `ability`, `reward_model`, `extra_info`) preserved
- Original system prompt content documented above for reference
- File size remains ~1.54 GB
### 2025-10-14 - Initial Release
- Filtered and converted 25,276 code problems from Eurus-2-RL-Data
- Single file for efficient loading
- Preserved original source information and metadata
- Included structured reasoning system prompt
- Total size: 1.54GB