arxiv:2605.06638

Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key

Published on May 7
Submitted by Tianle Wang on May 8
Abstract

ScaleLogic demonstrates that reinforcement learning training compute scales as a power law with reasoning depth, with scaling exponents increasing monotonically with logical expressiveness across multiple reasoning tasks.

AI-generated summary

Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. Our framework supports a wide range of logics, from simple implication-only logic ("if-then") to more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute T follows a power law with respect to reasoning depth D (T ∝ D^γ, R² > 0.99), and that the scaling exponent γ increases monotonically with logical expressiveness, from 1.04 to 2.60. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to +10.66 points) and more compute-efficient transfer than less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and that curriculum-based training substantially improves scaling efficiency.
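The abstract's central claim can be made concrete with a small sketch: a power law T ∝ D^γ is a straight line in log-log space, so γ is recoverable as the slope of a linear fit of log T against log D. The snippet below is purely illustrative (not the paper's code), using made-up depth/compute numbers generated from an assumed γ of 2.0:

```python
# Illustrative sketch: estimating the scaling exponent gamma in T ∝ D^gamma
# from hypothetical (depth, compute) measurements. All numbers are made up.
import numpy as np

# Hypothetical reasoning depths and training-compute measurements,
# generated from gamma = 2.0 with small multiplicative noise.
depths = np.array([2, 4, 8, 16, 32], dtype=float)
compute = 3.0 * depths ** 2.0 * np.array([1.02, 0.98, 1.01, 0.99, 1.00])

# T = c * D^gamma  =>  log T = log c + gamma * log D, so fit a line.
gamma, log_c = np.polyfit(np.log(depths), np.log(compute), 1)

# R^2 of the log-log fit, analogous to the paper's R^2 > 0.99 check.
pred = log_c + gamma * np.log(depths)
resid = np.log(compute) - pred
r2 = 1.0 - resid.var() / np.log(compute).var()

print(f"gamma ≈ {gamma:.2f}, R^2 ≈ {r2:.4f}")
```

Under this reading, the paper's γ moving from 1.04 to 2.60 means that doubling the reasoning depth roughly doubles required training compute for implication-only logic, but multiplies it by about 2^2.6 ≈ 6× for the most expressive setting.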

Community

Paper submitter

the thing that sticks with me is the reported power law T ∝ D^γ and the fact that γ grows as you move from simple implication to richer first-order logic. that hints that expressiveness, not just horizon, controls learning difficulty in a pretty dramatic way. would love an ablation where horizon is fixed and you vary expressiveness to confirm it's expressiveness driving γ, not just more or different proofs per task. btw the arxivlens breakdown helped me parse the method details, especially the backward proof expansion and the verifiable multiple-choice framing. overall, the transfer gains with expressive synthetic data look nice, but i want to see robustness under distribution shifts.


Get this paper in your agent:

hf papers read 2605.06638
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
