---
license: mit
language:
  - en
tags:
  - math
  - reasoning
  - verl
  - reinforcement-learning
  - math-reasoning
size_categories:
  - 10K<n<100K
task_categories:
  - text-generation
  - question-answering
dataset_info:
  features:
    - name: data_source
      dtype: string
    - name: prompt
      list:
        - name: role
          dtype: string
        - name: content
          dtype: string
    - name: ability
      dtype: string
    - name: reward_model
      struct:
        - name: style
          dtype: string
        - name: ground_truth
          dtype: string
    - name: extra_info
      struct:
        - name: index
          dtype: int64
        - name: original_dataset
          dtype: string
        - name: split
          dtype: string
  splits:
    - name: train
      num_bytes: 0
      num_examples: 35789
  download_size: 0
  dataset_size: 5000000
---

DeepScaleR-Preview VERL


📊 Dataset Summary

This dataset contains 35,789 mathematical reasoning problems in VERL format, processed from agentica-org/DeepScaleR-Preview-Dataset.

Key Features:

  • 35,789 high-quality math problems
  • Converted to VERL format for reward modeling
  • Verified ground truth answers
  • Ready for reinforcement learning training

🔗 Source Dataset

Original Repository

Dataset Description

DeepScaleR-Preview-Dataset contains approximately 40,000 unique mathematics problem-answer pairs compiled from AIME (American Invitational Mathematics Examination, 1984-2023), AMC (American Mathematics Competition, pre-2023), Omni-MATH dataset, and Still dataset. The dataset was used to train the DeepScaleR-1.5B-Preview model using distributed reinforcement learning.


🔄 Preprocessing Pipeline

This dataset has been preprocessed and converted to the VERL (Verification and Reinforcement Learning) format for use in mathematical reasoning tasks with reward modeling.

Cleaning Methodology

Standard Processing:

  • URL filtering (samples containing URLs removed)
  • Format normalization to VERL schema
  • Basic text cleaning and validation
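The URL-filtering step above can be approximated with a simple regex check. This is a hedged sketch only: the pipeline's actual pattern is not published, and `URL_PATTERN`, `contains_url`, and `filter_urls` are illustrative names, not part of any library.

```python
import re

# Assumed URL pattern; the real pipeline's regex may differ.
URL_PATTERN = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def contains_url(text: str) -> bool:
    """Return True if the problem text contains a URL-like token."""
    return bool(URL_PATTERN.search(text))

def filter_urls(samples: list[dict]) -> list[dict]:
    """Drop samples whose prompt content contains a URL."""
    return [
        s for s in samples
        if not contains_url(s["prompt"][0]["content"])
    ]
```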

Deduplication

Intra-dataset Deduplication:

  • Method: SHA-256 hash-based with text normalization
  • Before deduplication: 40,000 samples
  • After deduplication: 35,789 samples
  • Reduction: 10.5%

Inter-dataset Deduplication (v3.0):

  • Priority level: 1
  • Cross-dataset duplicates removed: 2,000
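As a rough illustration of the hash-based deduplication described above, the sketch below hashes a normalized prompt with SHA-256 and keeps only the first occurrence. The exact normalization used in this pipeline is not documented; lowercasing and whitespace collapsing are assumptions.

```python
import hashlib

def normalize(text: str) -> str:
    # Assumed normalization: lowercase and collapse whitespace runs.
    return " ".join(text.lower().split())

def dedupe(samples: list[dict]) -> list[dict]:
    """Keep the first occurrence of each normalized-prompt SHA-256 digest."""
    seen: set[str] = set()
    unique = []
    for sample in samples:
        text = sample["prompt"][0]["content"]
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(sample)
    return unique
```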

💡 Preprocessing Examples

Example 1: Standard Processing (No Cleaning Preset)

Before Cleaning:

Calculate the derivative of $f(x) = x^2 + 3x + 2$ with respect to $x$.

After Cleaning:

Calculate the derivative of $f(x) = x^2 + 3x + 2$ with respect to $x$.

(Unchanged: this sample contained no URLs or artifacts, so only format normalization applied.)

Changes Applied:

  • ✓ Format normalization to VERL schema
  • ✓ URL filtering (samples with URLs removed)
  • ✓ Basic text validation
  • ✓ No artifact removal applied

πŸ“ VERL Schema

This dataset follows the standardized VERL (Verification and Reinforcement Learning) format:

```json
{
  "data_source": "openai/gsm8k",
  "prompt": [
    {
      "content": "Calculate the sum of all odd numbers from 1 to 99.",
      "role": "user"
    }
  ],
  "ability": "math",
  "reward_model": {
    "style": "rule",
    "ground_truth": "\\boxed{2500}",
    "hash": "sha256:abc123..."
  },
  "extra_info": {
    "split": "train"
  }
}
```

Field Descriptions

| Field | Type | Description |
|-------|------|-------------|
| `data_source` | string | Original dataset identifier (e.g., `openai/gsm8k`, `numina_aime`) |
| `prompt` | list[dict] | User query in chat format with `role` and `content` |
| `ability` | string | Task type (always `"math"` for this dataset) |
| `reward_model.style` | string | Reward computation method (`"rule"` for rule-based verification) |
| `reward_model.ground_truth` | string | Expected answer for verification (often in `\boxed{}` format) |
| `reward_model.hash` | string | SHA-256 hash of prompt content for deduplication |
| `extra_info.split` | string | Original split identifier (`"train"`, `"test"`, etc.) |
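A lightweight structural check against the schema above can catch malformed records before training. This is an illustrative helper (`validate_verl_sample` is not part of any library):

```python
def validate_verl_sample(sample: dict) -> bool:
    """Check a record against the VERL schema fields described above."""
    try:
        assert isinstance(sample["data_source"], str)
        assert isinstance(sample["prompt"], list) and sample["prompt"]
        for turn in sample["prompt"]:
            assert isinstance(turn["role"], str)
            assert isinstance(turn["content"], str)
        assert isinstance(sample["ability"], str)
        reward = sample["reward_model"]
        assert isinstance(reward["style"], str)
        assert isinstance(reward["ground_truth"], str)
        assert isinstance(sample["extra_info"], dict)
    except (AssertionError, KeyError, TypeError):
        return False
    return True
```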

📈 Dataset Statistics

Sample Distribution

  • Total Samples: 35,789
  • Dataset Size: 4.8 MB
  • Average Problem Length: N/A

Data Sources

Distribution of problems by original data source:

| Source | Count | Percentage |
|--------|-------|------------|
| Mixed Sources | 35,789 | 100% |

Note: Detailed source distribution statistics will be added in future updates.
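In the meantime, the distribution can be tallied directly from the `data_source` field. A minimal sketch over already-loaded records (`source_distribution` is a hypothetical helper, shown here on in-memory dicts):

```python
from collections import Counter

def source_distribution(samples: list[dict]) -> dict:
    """Count samples per original data_source and report percentages
    rounded to one decimal place."""
    counts = Counter(s["data_source"] for s in samples)
    total = sum(counts.values())
    return {src: (n, round(100 * n / total, 1)) for src, n in counts.items()}
```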


🚀 Usage

Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("sungyub/deepscaler-preview-verl")

# Load with streaming (recommended for large datasets)
dataset = load_dataset("sungyub/deepscaler-preview-verl", streaming=True)

# Preview first few examples
for example in dataset["train"].take(5):
    print(example["prompt"][0]["content"])          # User question
    print(example["reward_model"]["ground_truth"])  # Answer
    print("---")
```

Using with VERL

```python
from datatrove.utils.reward_score import compute_score

# Compute reward score for a generated solution
score = compute_score(
    data_source=example["data_source"],
    solution_str=generated_solution,
    ground_truth=example["reward_model"]["ground_truth"],
    format_type="auto",  # Auto-detect XML or GPT OSS format
)

print(f"Reward score: {score}")
```
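For intuition, rule-style verification typically extracts the final `\boxed{}` answer from the generated solution and compares it with the ground truth. The sketch below is a simplified stand-in, not the actual `compute_score` implementation; `extract_boxed` and `rule_based_score` are illustrative names.

```python
import re

# Matches \boxed{...} with one level of nested braces.
BOXED = re.compile(r"\\boxed\{([^{}]*(?:\{[^{}]*\}[^{}]*)*)\}")

def extract_boxed(text: str):
    """Return the content of the last \\boxed{...} in the text, or None."""
    matches = BOXED.findall(text)
    return matches[-1] if matches else None

def rule_based_score(solution: str, ground_truth: str) -> float:
    """Simplified rule-style check: exact match of the boxed answer."""
    answer = extract_boxed(solution)
    target = extract_boxed(ground_truth) or ground_truth
    return 1.0 if answer is not None and answer == target else 0.0
```

Real verifiers normalize answers far more aggressively (symbolic equivalence, fraction forms, units); exact string match is only the baseline idea.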

Integration with DataTrove

```python
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.executor import LocalPipelineExecutor

pipeline = [
    ParquetReader("hf://datasets/sungyub/deepscaler-preview-verl", text_key="prompt"),
    LambdaFilter(lambda doc: len(doc.text) > 100),  # Filter short problems
    # Add more processing steps...
]

executor = LocalPipelineExecutor(pipeline=pipeline, tasks=4)
executor.run()
```

📚 Citation

Original Dataset

```bibtex
@dataset{agentica_deepscaler_2025,
  author = {Agentica Organization},
  title = {DeepScaleR-Preview-Dataset: AIME and AMC Mathematics Collection},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset}}
}
```

This Processed Version

```bibtex
@dataset{sungyub_math_verl_deepscaler-preview-verl,
  author = {Sungyub Kim},
  title = {DeepScaleR-Preview VERL},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/sungyub/deepscaler-preview-verl}}
}
```

βš–οΈ License

  • This processed dataset: MIT
  • Original dataset: MIT

πŸ™ Acknowledgments

This dataset was processed using the DataTrove library.

Credits:

  • Original dataset authors: Agentica Organization
  • Processing and VERL conversion: Sungyub Kim
  • MathDatasetCleaner implementation: DataTrove contributors

Special thanks to: Agentica team for curating competition-level mathematics problems


πŸ“ Version History

v1.0.0 (Initial Release)

  • Processed 35,789 samples from agentica-org/DeepScaleR-Preview-Dataset
  • Converted to VERL format
  • Standard processing applied
  • Ready for reinforcement learning training

🔗 Related Resources


Questions or issues? Please open an issue on the DataTrove GitHub repository.