---
license: mit
language:
  - en
tags:
  - chemistry
  - physics
  - math
  - biology
  - science
pretty_name: open-rl
size_categories:
  - n<1K
task_categories:
  - question-answering
---

# Open-RL

License: MIT


## Dataset Summary

This dataset contains self-contained, verifiable, and unambiguous STEM reasoning problems across Physics, Mathematics, Biology, and Chemistry.

Each problem:

- Requires multi-step reasoning
- Involves symbolic manipulation and/or numerical computation
- Has a deterministic, objectively verifiable final answer

The problems were evaluated against contemporary large language models. Observed pass rates indicate that the tasks are non-trivial yet solvable, placing them within reach of advanced models while still exposing meaningful reasoning gaps.

This makes the dataset particularly suitable for:

- Reinforcement learning (RL) fine-tuning
- Reward modeling
- Outcome-supervised training
- Verifiable reasoning benchmarks

## Dataset Structure

| Field | Type | Description |
|---|---|---|
| `conversation_id` | string | Unique identifier for each QA pair. |
| `domain` | string | One of Physics, Math, Chemistry, Biology. |
| `sub_domain` | string | Specific sub-discipline within the domain. |
| `question` | string | STEM problem statement (LaTeX supported). |
| `answer` | string | Deterministic ground-truth solution. |

### Example

```json
{
  "conversation_id": "217998",
  "domain": "Physics",
  "sub_domain": "Astrophysics",
  "question": "Consider a Navarro–Frenk–White (NFW) dark matter halo profile where...",
  "answer": "\\( \\frac{4GM_{0}}{r_{0}} + \\frac{16\\pi Gk}{r_{0}}\\left[ \\ln\\left(\\frac{r_{0}}{r_{s}}\\right) + 0.31 \\right] \\)"
}
```

## Verifiability and Automatic Grading

A core design principle of this dataset is objective verifiability.

Each problem is constructed such that:

- The final answer is deterministic
- Correctness can be evaluated programmatically
- No subjective interpretation is required
- There is a clear separation between reasoning steps and final outcome

### Answer Types

The dataset includes answers that are:

- Closed-form symbolic expressions
- Numerical scalars
- Algebraic identities
- Simplified analytic forms
- Canonical LaTeX representations

Because answers are deterministic, evaluation can be performed via:

- Exact string matching (after normalization)
- Symbolic equivalence checking (e.g., SymPy)
- Numerical tolerance comparison
- Unit consistency validation (where applicable)
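For illustration, the symbolic-equivalence and numerical-tolerance checks above can be sketched with SymPy (a minimal sketch; the tolerance value and the exact normalization are illustrative assumptions, not the dataset's official grader):

```python
import sympy as sp

def grade_symbolic(predicted: str, gold: str) -> bool:
    """Check whether two symbolic expressions are mathematically equivalent."""
    pred = sp.sympify(predicted)
    ref = sp.sympify(gold)
    # simplify(pred - ref) reducing to 0 implies equivalence for most closed forms.
    return sp.simplify(pred - ref) == 0

def grade_numeric(predicted: float, gold: float, rel_tol: float = 1e-6) -> bool:
    """Compare numerical scalars under a relative tolerance (illustrative value)."""
    return abs(predicted - gold) <= rel_tol * max(abs(gold), 1e-12)

# Equivalent algebraic forms pass the symbolic check.
assert grade_symbolic("(x + 1)**2", "x**2 + 2*x + 1")
# Nearby numerical values pass the tolerance check.
assert grade_numeric(3.14159265, 3.14159266)
```

Exact string matching after normalization is cheaper and works for canonical forms; the symbolic check is the fallback when the model emits an equivalent but differently arranged expression.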

## Data Quality Assurance Process

To ensure scientific validity, every task is prepared and then independently reviewed twice by PhD-level experts.

Key quality rubrics include:

- Accuracy of both prompt and answer
- Clarity of the prompt and the underlying reasoning
- Expert verification of model-breaking cases caused by incorrect model reasoning
- Google-proof originality validation

## Reinforcement Learning and Outcome Supervision

This dataset is designed to support outcome-based reinforcement learning for reasoning models.

In contrast to preference-based RL (RLHF), which relies on subjective ranking signals, this dataset enables:

- Outcome-supervised reinforcement learning (OSRL)
- Deterministic reward assignment
- Binary or graded correctness rewards
- Scalable automated evaluation

### Example RL Setup

Given:

- Prompt: the `question` field
- Model output: the predicted final answer

Reward can be computed as:

- +1 if the final answer matches the ground truth
- 0 or -1 otherwise
- Optional partial credit via symbolic or numerical closeness

This allows:

- Policy gradient methods (e.g., PPO)
- Direct optimization against correctness signals
- Reward model bootstrapping
- Iterative self-improvement pipelines
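The reward scheme above can be sketched as a plain function (a minimal sketch; the crude normalization step is an illustrative assumption, since deterministic answers permit stricter or symbolic checks):

```python
def normalize(answer: str) -> str:
    """Crude normalization: drop whitespace and common LaTeX spacing commands."""
    for token in (" ", r"\left", r"\right", r"\,"):
        answer = answer.replace(token, "")
    return answer.strip()

def outcome_reward(predicted: str, gold: str) -> float:
    """Binary outcome reward: +1.0 on an exact match after normalization, else 0.0."""
    return 1.0 if normalize(predicted) == normalize(gold) else 0.0

# Cosmetic differences do not change the reward.
assert outcome_reward(r"\frac{4GM_{0}}{r_{0}}", r"\frac{4 G M_{0}}{ r_{0} }") == 1.0
assert outcome_reward("42", "43") == 0.0
```

In a PPO-style loop this scalar is simply attached to each completed rollout; graded variants can interpolate between 0 and 1 using symbolic or numerical closeness.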

## Calibration Regime

The problems were stress-tested against advanced language models and found to be:

- Not trivially solved
- Not universally failed
- Within the capability frontier of modern LLMs

This places them in a learning-efficient regime:

- Hard enough to produce a gradient signal
- Solvable enough to avoid reward sparsity
- Suitable for curriculum-style training
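One way to exploit this regime is to bucket problems by measured pass rate before curriculum-style training (a hedged sketch; the thresholds and all IDs other than the documented example are illustrative assumptions):

```python
def curriculum_buckets(pass_rates: dict) -> dict:
    """Sort problem IDs into difficulty tiers by observed model pass rate."""
    buckets = {"easy": [], "medium": [], "frontier": []}
    for problem_id, rate in pass_rates.items():
        if rate >= 0.7:
            buckets["easy"].append(problem_id)      # little gradient value left
        elif rate > 0.0:
            buckets["medium"].append(problem_id)    # dense learning signal
        else:
            buckets["frontier"].append(problem_id)  # zero reward under standard RL
    return buckets

rates = {"217998": 0.4, "300001": 0.9, "300002": 0.0}
print(curriculum_buckets(rates))
# → {'easy': ['300001'], 'medium': ['217998'], 'frontier': ['300002']}
```

Training can then start on the medium tier and graduate toward the frontier tier as pass rates improve.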

## Future Directions: NuRL and Structured Nudging

We plan to extend this dataset with additional problem sets and a structured "nudge" augmentation layer inspired by the paper "Nudging the Boundaries of LLM Reasoning".

### Motivation

Standard online RL algorithms (e.g., GRPO-style approaches) can only learn from problems where the model occasionally produces correct rollouts. For sufficiently difficult problems with a 0% pass rate, no reward signal is generated, and therefore no gradient updates occur. As a result, such problems cannot contribute to expanding the model’s reasoning frontier.
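This failure mode can be seen directly in a group-relative advantage computation (a simplified sketch of a GRPO-style mean baseline, omitting the standard-deviation normalization used in practice):

```python
def group_relative_advantages(rewards):
    """Advantage of each rollout relative to the group mean reward.
    When every rollout fails (all rewards are 0), every advantage is 0,
    so the problem contributes no gradient update at all."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

print(group_relative_advantages([0.0, 0.0, 0.0, 0.0]))  # [0.0, 0.0, 0.0, 0.0]
print(group_relative_advantages([1.0, 0.0, 0.0, 0.0]))  # [0.75, -0.25, -0.25, -0.25]
```

A single successful rollout is enough to produce non-zero advantages; a 0% pass rate produces none, which is the gap nudging is meant to close.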

### NuRL-Style Nudging

To address this limitation, future versions of this dataset will include:

- Abstract, high-level hints ("nudges")
- Hints generated conditioned on the gold answer
- Carefully designed cues that reduce problem difficulty without revealing the solution

Under a NuRL-style training pipeline:

  1. Rollouts are first generated without hints.
  2. If pass rate > 0%, standard RL proceeds.
  3. If pass rate = 0%, a structured hint is injected.
  4. A new batch of trajectories is generated with the hint.

This enables:

- Previously unsolvable samples to produce non-zero rewards
- Learning signal from frontier-level problems
- Expansion of the model's upper reasoning bound
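The four-step pipeline above can be sketched as a single sampling step (a minimal sketch; `generate_rollouts`, `grade`, and `make_hint` are hypothetical placeholders for the model sampler, the automatic grader, and a gold-answer-conditioned hint generator):

```python
def nurl_step(question, gold, generate_rollouts, grade, make_hint, n=8):
    """One NuRL-style sampling step: inject a hint only when the pass rate is 0%."""
    rollouts = generate_rollouts(question, n)                     # 1. sample without hints
    rewards = [1.0 if grade(ans, gold) else 0.0 for ans in rollouts]
    if any(rewards):                                              # 2. pass rate > 0%: standard RL
        return rollouts, rewards, False
    hint = make_hint(question, gold)                              # 3. structured hint injection
    rollouts = generate_rollouts(f"{question}\nHint: {hint}", n)  # 4. resample with the hint
    rewards = [1.0 if grade(ans, gold) else 0.0 for ans in rollouts]
    return rollouts, rewards, True
```

The returned flag records whether the trajectories were nudged, which matters downstream since hinted and unhinted batches may need different treatment in the policy update.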

### Design Principles for Effective Nudges

Planned nudges will follow empirical findings from prior work:

- Hints should be abstract and knowledge-oriented, not answer-revealing
- Hints should preserve distributional alignment with the base policy's reasoning
- Hints should be injected only when necessary
- Nudges are most effective after base RL convergence

This evolution positions the dataset not only as a verifiable benchmark, but as a controlled testbed for upper-bound expansion in reinforcement learning for reasoning models.


## Citation

```bibtex
@dataset{turing_2026_open_rl,
  title  = {Open-RL},
  author = {Saurabh Patil and Anshuman Lall and Marko Pavlovic and Chinmayee Shukla and Seetesh Pande and Tejass Mohan Ukarde and Amanda Gollo Bertollo and Mahesh Joshi and Kihwan Han},
  year   = {2026},
  url    = {https://huggingface.co/datasets/TuringEnterprises/Open-RL}
}
```