---
license: mit
language:
- en
tags:
- chemistry
- physics
- math
- biology
- science
pretty_name: open-rl
size_categories:
- n<1K
task_categories:
- question-answering
---
# Open-RL
[License: MIT](https://opensource.org/licenses/MIT)
[Turing](https://turing.com)
---
## Dataset Summary
This dataset contains **self-contained, verifiable, and unambiguous STEM reasoning problems** across Physics, Mathematics, Biology, and Chemistry.
Each problem:
* Requires multi-step reasoning
* Involves symbolic manipulation and/or numerical computation
* Has a deterministic, objectively verifiable final answer
The problems were evaluated against contemporary large language models. Observed pass rates indicate that the tasks are **non-trivial yet solvable**, placing them within reach of advanced models while still exposing meaningful reasoning gaps.
This makes the dataset particularly suitable for:
* Reinforcement learning (RL) fine-tuning
* Reward modeling
* Outcome-supervised training
* Verifiable reasoning benchmarks
---
## Dataset Structure
| Field | Type | Description |
| ----------------- | ------ | ----------------------------------------- |
| `conversation_id` | string | Unique identifier for each QA pair. |
| `domain` | string | Physics, Math, Chemistry, Biology. |
| `sub-domain`      | string | Specific sub-discipline within the domain (e.g., Astrophysics). |
| `question` | string | STEM problem statement (LaTeX supported). |
| `answer` | string | Deterministic ground-truth solution. |
---
## Example
```json
{
  "conversation_id": "217998",
  "domain": "Physics",
  "sub-domain": "Astrophysics",
  "question": "Consider a Navarro–Frenk–White (NFW) dark matter halo profile where...",
  "answer": "\\( \\frac{4GM_{0}}{r_{0}} + \\frac{16\\pi Gk}{r_{0}}\\left[ \\ln\\left(\\frac{r_{0}}{r_{s}}\\right) + 0.31 \\right] \\)"
}
```
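Records in this schema can typically be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the repository id from the citation below and a `train` split (the split name may differ in the published configuration):

```python
# Hedged sketch: loading the dataset with the Hugging Face `datasets` library.
# The repository id comes from the citation URL; the "train" split name is an
# assumption and may differ in the published configuration.
from datasets import load_dataset

ds = load_dataset("TuringEnterprises/Open-RL", split="train")
print(ds[0]["domain"], ds[0]["question"][:80])
```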
---
## Verifiability and Automatic Grading
A core design principle of this dataset is **objective verifiability**.
Each problem is constructed such that:
* The final answer is deterministic
* Correctness can be evaluated programmatically
* No subjective interpretation is required
* There is a clear separation between reasoning steps and final outcome
### Answer Types
The dataset includes answers that are:
* Closed-form symbolic expressions
* Numerical scalars
* Algebraic identities
* Simplified analytic forms
* Canonical LaTeX representations
Because answers are deterministic, evaluation can be performed via the following methods (a minimal grading sketch appears after this list):
* Exact string matching (after normalization)
* Symbolic equivalence checking (e.g., SymPy)
* Numerical tolerance comparison
* Unit consistency validation (where applicable)
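As a rough illustration (not the dataset's official grader), the sketch below combines symbolic equivalence checking via SymPy with a numerical-tolerance fallback; the function name and tolerance value are illustrative assumptions.

```python
# Minimal grading sketch (not the official grader): checks a predicted answer
# against the ground truth via symbolic equivalence, falling back to a
# numerical tolerance comparison.
import sympy as sp
from sympy.parsing.latex import parse_latex  # requires antlr4-python3-runtime

def is_correct(predicted_latex: str, gold_latex: str, tol: float = 1e-9) -> bool:
    try:
        pred = parse_latex(predicted_latex)
        gold = parse_latex(gold_latex)
    except Exception:
        # Fall back to normalized string matching if LaTeX parsing fails
        return predicted_latex.strip() == gold_latex.strip()

    # Symbolic equivalence: the difference simplifies to zero
    if sp.simplify(pred - gold) == 0:
        return True

    # Numerical fallback for purely numeric answers
    try:
        return abs(float(pred.evalf()) - float(gold.evalf())) <= tol
    except (TypeError, ValueError):
        return False

# Example: algebraically equivalent forms are graded as correct
print(is_correct(r"\frac{2x}{4}", r"\frac{x}{2}"))  # True
```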
---
## Data Quality Assurance Process
To ensure the scientific validity of the answers, every task is prepared and independently reviewed twice by PhD experts.
Key quality rubrics include:
* Prompt and answer accuracy
* Clarity of the prompt and underlying reasoning
* Expert verification of model-breaking cases caused by incorrect model reasoning
* Google-proof originality validation
---
## Reinforcement Learning and Outcome Supervision
This dataset is designed to support **outcome-based reinforcement learning** for reasoning models.
In contrast to preference-based RL (RLHF), which relies on subjective ranking signals, this dataset enables:
* Outcome-supervised reinforcement learning (OSRL)
* Deterministic reward assignment
* Binary or graded correctness rewards
* Scalable automated evaluation
### Example RL Setup
Given:
* Prompt: `question`
* Model output: predicted final answer
Reward can be computed as follows (a sketch appears after this list):
* `+1` if the final answer matches ground truth
* `0` or `-1` otherwise
* Optional partial credit via symbolic or numerical closeness
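A minimal sketch of such a reward function, assuming a crude normalized string comparison (in practice this could be replaced by the symbolic grader sketched above); the normalization and penalty value are illustrative choices, not the dataset's official setup:

```python
# Illustrative reward assignment for outcome-supervised RL; a sketch, not the
# dataset's official evaluation pipeline.
def normalize(ans: str) -> str:
    # Strip whitespace and LaTeX inline-math delimiters for a crude comparison
    return ans.replace(r"\(", "").replace(r"\)", "").replace(" ", "").strip()

def compute_reward(predicted: str, gold: str, penalty: float = 0.0) -> float:
    """Return +1.0 for a correct final answer, else `penalty` (0 or -1)."""
    return 1.0 if normalize(predicted) == normalize(gold) else penalty

# Example usage: equivalent formatting of the same answer earns full reward
print(compute_reward(r"\( \frac{x}{2} \)", r"\(\frac{x}{2}\)"))  # 1.0
```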
This allows:
* Policy gradient methods (e.g., PPO)
* Direct optimization against correctness signals
* Reward model bootstrapping
* Iterative self-improvement pipelines
### Calibration Regime
The problems were stress-tested against advanced language models and found to be:
* Not trivially solved
* Not universally failed
* Within the capability frontier of modern LLMs
This places them in a **learning-efficient regime**:
* Hard enough to produce gradient signal
* Solvable enough to avoid reward sparsity
* Suitable for curriculum-style training
---
## Future Directions: NuRL and Structured Nudging
We plan to extend this dataset with additional problem sets and a structured **"nudge" augmentation layer** inspired by the paper *["Nudging the Boundaries of LLM Reasoning"](https://arxiv.org/html/2509.25666v1)*.
### Motivation
Standard online RL algorithms (e.g., GRPO-style approaches) can only learn from problems where the model occasionally produces correct rollouts. For sufficiently difficult problems with a **0% pass rate**, no reward signal is generated, and therefore no gradient updates occur. As a result, such problems cannot contribute to expanding the model’s reasoning frontier.
### NuRL-Style Nudging
To address this limitation, future versions of this dataset will include:
* Abstract, high-level **hints ("nudges")**
* Hints generated conditionally using the gold answer
* Carefully designed cues that reduce problem difficulty without revealing the solution
Under a NuRL-style training pipeline (a sketch follows the steps below):
1. Rollouts are first generated without hints.
2. If pass rate > 0%, standard RL proceeds.
3. If pass rate = 0%, a structured hint is injected.
4. A new batch of trajectories is generated with the hint.
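The gating logic can be summarized in a short sketch. This is a hypothetical illustration, not the NuRL reference implementation; the rollout, hint-injection, and update routines are passed in as callables because they are specific to the training stack.

```python
# Minimal sketch of the NuRL-style gating loop described above.
# `generate_rollouts`, `inject_hint`, and `rl_update` are hypothetical
# callables supplied by the surrounding training stack.
def nurl_step(policy, question, gold_answer, hint,
              generate_rollouts, inject_hint, rl_update, n_rollouts=8):
    # 1. Rollouts are first generated without hints.
    rollouts = generate_rollouts(policy, question, n=n_rollouts)
    pass_rate = sum(ans == gold_answer for ans in rollouts) / n_rollouts

    # 2. If pass rate > 0%, standard RL proceeds on the original prompt.
    if pass_rate > 0:
        return rl_update(policy, question, rollouts, gold_answer)

    # 3. If pass rate = 0%, a structured hint is injected into the prompt.
    nudged_question = inject_hint(question, hint)

    # 4. A new batch of trajectories is generated with the hint.
    nudged_rollouts = generate_rollouts(policy, nudged_question, n=n_rollouts)
    return rl_update(policy, nudged_question, nudged_rollouts, gold_answer)
```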
This enables:
* Previously unsolvable samples to produce non-zero rewards
* Learning signal from frontier-level problems
* Expansion of the model’s upper reasoning bound
### Design Principles for Effective Nudges
Planned nudges will follow empirical findings from prior work:
* Hints should be **abstract and knowledge-oriented**, not answer-revealing
* Hints should preserve distributional alignment with base policy reasoning
* Hints should be injected only when necessary
* Nudges are most effective after base RL convergence
---
This evolution positions the dataset not only as a verifiable benchmark, but also as a controlled testbed for **upper-bound expansion in reinforcement learning for reasoning models**.
---
## Citation
```bibtex
@dataset{turing_2026_open_rl,
  title  = {Open-RL},
  author = {Saurabh Patil and Anshuman Lall and Marko Pavlovic and Chinmayee Shukla and Seetesh Pande and Tejass Mohan Ukarde and Amanda Gollo Bertollo and Mahesh Joshi and Kihwan Han},
  year   = {2026},
  url    = {https://huggingface.co/datasets/TuringEnterprises/Open-RL}
}
``` |