---
license: mit
language:
  - en
tags:
  - math
  - reasoning
  - verl
  - reinforcement-learning
  - math-reasoning
size_categories:
  - 10K<n<100K
task_categories:
  - text-generation
  - question-answering
dataset_info:
  features:
    - name: data_source
      dtype: string
    - name: prompt
      list:
        - name: role
          dtype: string
        - name: content
          dtype: string
    - name: ability
      dtype: string
    - name: reward_model
      struct:
        - name: style
          dtype: string
        - name: ground_truth
          dtype: string
    - name: extra_info
      struct:
        - name: index
          dtype: int64
        - name: original_dataset
          dtype: string
        - name: split
          dtype: string
  splits:
    - name: train
      num_bytes: 0
      num_examples: 95496
  download_size: 0
  dataset_size: 11000000
---

# DeepMath-103K VERL


## 📊 Dataset Summary

This dataset contains 95,496 mathematical reasoning problems in VERL format, processed from zwhe99/DeepMath-103K.

**Key Features:**

- 95,496 high-quality math problems
- Converted to VERL format for reward modeling
- Verified ground truth answers
- Ready for reinforcement learning training

## 🔗 Source Dataset

**Original Repository:** [zwhe99/DeepMath-103K](https://huggingface.co/datasets/zwhe99/DeepMath-103K)

### Dataset Description

DeepMath-103K is a curated collection of 103,000 mathematical problems covering various difficulty levels and mathematical domains. The dataset emphasizes step-by-step reasoning and verification.


## 🔄 Preprocessing Pipeline

This dataset has been preprocessed and converted to the VERL (Verification and Reinforcement Learning) format for use in mathematical reasoning tasks with reward modeling.

### Cleaning Methodology

**Standard Processing:**

- URL filtering (samples containing URLs are removed; see the sketch below)
- Format normalization to the VERL schema
- Basic text cleaning and validation
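
A minimal sketch of the URL-filtering step, assuming a simple regex-based check; the `contains_url` helper and the pattern are illustrative, not the actual pipeline code:

```python
import re

# Simple URL pattern; the real pipeline may use a stricter rule (assumption).
URL_RE = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def contains_url(text: str) -> bool:
    """Return True if the problem text contains a URL."""
    return bool(URL_RE.search(text))

# Keep only samples whose prompt text is URL-free.
samples = [
    {"prompt": [{"role": "user", "content": "Compute $1 + 1$."}]},
    {"prompt": [{"role": "user", "content": "See https://example.com for the figure."}]},
]
clean = [s for s in samples if not contains_url(s["prompt"][0]["content"])]
print(len(clean))  # 1
```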

### Deduplication

**Intra-dataset Deduplication:**

- Method: SHA-256 hash-based with text normalization (sketched below)
- Before deduplication: 103,000 samples
- After deduplication: 95,496 samples
- Reduction: 7.3%

**Inter-dataset Deduplication (v3.0):**

- Priority level: 4
- Cross-dataset duplicates removed: 5,000
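
A minimal sketch of the hash-based deduplication described above; the exact text-normalization rules are assumptions:

```python
import hashlib

def normalized_hash(text: str) -> str:
    """Lowercase, collapse whitespace, then SHA-256 hash the problem text."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def deduplicate(problems: list[str]) -> list[str]:
    """Keep only the first occurrence of each normalized problem."""
    seen: set[str] = set()
    unique = []
    for problem in problems:
        digest = normalized_hash(problem)
        if digest not in seen:
            seen.add(digest)
            unique.append(problem)
    return unique

print(deduplicate(["Compute  $2+2$.", "compute $2+2$.", "Compute $3+3$."]))
# ['Compute  $2+2$.', 'Compute $3+3$.']
```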

## 💡 Preprocessing Examples

### Example 1: Standard Processing (No Cleaning Preset)

**Before Cleaning:**

Calculate the derivative of $f(x) = x^2 + 3x + 2$ with respect to $x$.

**After Cleaning:**

Calculate the derivative of $f(x) = x^2 + 3x + 2$ with respect to $x$.

**Changes Applied:**

- ✓ Format normalization to the VERL schema (see the sketch below)
- ✓ URL filtering (samples with URLs removed)
- ✓ Basic text validation
- ✓ No artifact removal applied
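
As an illustration of the format-normalization step, here is a hypothetical conversion of a raw (question, answer) pair into a VERL-style record. The field names follow the schema documented below, but the `to_verl_record` function itself is an assumption, not the actual pipeline code:

```python
def to_verl_record(question: str, answer: str, index: int) -> dict:
    """Wrap a raw question/answer pair in the VERL record layout (illustrative only)."""
    return {
        "data_source": "zwhe99/DeepMath-103K",
        "prompt": [{"role": "user", "content": question}],
        "ability": "math",
        "reward_model": {"style": "rule", "ground_truth": answer},
        "extra_info": {
            "index": index,
            "original_dataset": "zwhe99/DeepMath-103K",
            "split": "train",
        },
    }

record = to_verl_record(
    "Calculate the derivative of $f(x) = x^2 + 3x + 2$ with respect to $x$.",
    "2x + 3",
    0,
)
print(record["reward_model"]["ground_truth"])  # 2x + 3
```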

## 📝 VERL Schema

This dataset follows the standardized VERL (Verification and Reinforcement Learning) format:

```json
{
  "data_source": "openai/gsm8k",
  "prompt": [
    {
      "content": "Calculate the sum of all odd numbers from 1 to 99.",
      "role": "user"
    }
  ],
  "ability": "math",
  "reward_model": {
    "style": "rule",
    "ground_truth": "\\boxed{2500}",
    "hash": "sha256:abc123..."
  },
  "extra_info": {
    "split": "train"
  }
}
```

### Field Descriptions

| Field | Type | Description |
|-------|------|-------------|
| `data_source` | string | Original dataset identifier (e.g., `openai/gsm8k`, `numina_aime`) |
| `prompt` | `list[dict]` | User query in chat format with `role` and `content` |
| `ability` | string | Task type (always `"math"` for this dataset) |
| `reward_model.style` | string | Reward computation method (`"rule"` for rule-based verification) |
| `reward_model.ground_truth` | string | Expected answer for verification (often in `\boxed{}` format) |
| `reward_model.hash` | string | SHA-256 hash of prompt content for deduplication |
| `extra_info.split` | string | Original split identifier (`"train"`, `"test"`, etc.) |
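
Because `reward_model.ground_truth` often wraps the final answer in `\boxed{}`, a small hypothetical helper for pulling it out (not part of the dataset tooling) can be convenient:

```python
import re

def extract_boxed(ground_truth: str) -> str:
    """Return the content of the last \\boxed{...} expression, or the raw string.

    Note: nested braces (e.g. \\boxed{\\frac{1}{2}}) are not handled by this sketch.
    """
    matches = re.findall(r"\\boxed\{([^{}]*)\}", ground_truth)
    return matches[-1] if matches else ground_truth

print(extract_boxed(r"\boxed{2500}"))  # 2500
print(extract_boxed("42"))             # 42
```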

## 📈 Dataset Statistics

### Sample Distribution

- Total Samples: 95,496
- Dataset Size: 10.5 MB
- Average Problem Length: not yet computed (see the sketch below)
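
Since the average problem length is not reported yet, one way to compute it yourself, sketched with the `datasets` library (streaming the full split takes a few minutes):

```python
from datasets import load_dataset

# Stream the dataset to avoid downloading everything up front.
ds = load_dataset("sungyub/deepmath-103k-verl", split="train", streaming=True)

total_chars = 0
count = 0
for example in ds:
    total_chars += len(example["prompt"][0]["content"])
    count += 1

print(f"Average problem length: {total_chars / count:.1f} characters over {count} samples")
```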

### Data Sources

Distribution of problems by original data source:

| Source | Count | Percentage |
|--------|-------|------------|
| Mixed Sources | 95,496 | 100% |

Note: Detailed source distribution statistics will be added in future updates.


## 🚀 Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("sungyub/deepmath-103k-verl")

# Load with streaming (recommended for large datasets)
dataset = load_dataset("sungyub/deepmath-103k-verl", streaming=True)

# Preview first few examples
for example in dataset['train'].take(5):
    print(example['prompt'][0]['content'])  # User question
    print(example['reward_model']['ground_truth'])  # Answer
    print("---")
```

### Using with VERL

```python
from datatrove.utils.reward_score import compute_score

# Compute reward score for a generated solution
score = compute_score(
    data_source=example['data_source'],
    solution_str=generated_solution,
    ground_truth=example['reward_model']['ground_truth'],
    format_type="auto"  # Auto-detect XML or GPT OSS format
)

print(f"Reward score: {score}")
```

### Integration with DataTrove

```python
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.executor import LocalPipelineExecutor

pipeline = [
    # Read the Parquet files directly from the Hugging Face Hub
    ParquetReader("hf://datasets/sungyub/deepmath-103k-verl", text_key="prompt"),
    LambdaFilter(lambda doc: len(doc.text) > 100),  # Filter out short problems
    # Add more processing steps...
]

executor = LocalPipelineExecutor(pipeline=pipeline, tasks=4)
executor.run()
```

## 📚 Citation

### Original Dataset

```bibtex
@article{he2025deepmath,
  title   = {DeepMath-103K: A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning},
  author  = {He, Zhiwei and others},
  journal = {arXiv preprint arXiv:2504.11456},
  year    = {2025},
  url     = {https://arxiv.org/abs/2504.11456}
}
```

### This Processed Version

```bibtex
@dataset{sungyub_math_verl_deepmath-103k-verl,
  author       = {Sungyub Kim},
  title        = {DeepMath-103K VERL},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/sungyub/deepmath-103k-verl}}
}
```

## ⚖️ License

- This processed dataset: MIT
- Original dataset: MIT

## 🙏 Acknowledgments

This dataset was processed using the DataTrove library.

**Credits:**

- Original dataset authors: Zhiwei He et al.
- Processing and VERL conversion: Sungyub Kim
- MathDatasetCleaner implementation: DataTrove contributors

Special thanks to Zhiwei He and collaborators for the DeepMath-103K dataset.


## 📝 Version History

### v1.0.0 (Initial Release)

- Processed 95,496 samples from zwhe99/DeepMath-103K
- Converted to VERL format
- Standard processing applied
- Ready for reinforcement learning training

## 🔗 Related Resources


Questions or issues? Open an issue on the [DataTrove GitHub repository](https://github.com/huggingface/datatrove).