arxiv:2605.07153

Beyond Reasoning: Reinforcement Learning Unlocks Parametric Knowledge in LLMs

Published on May 8 · Submitted by Wanli Yang on May 13

Abstract

AI-generated summary

Reinforcement learning improves large language model recall of parametric knowledge by redistributing probability mass toward correct answers, with gains driven primarily by reinforcing rare but learnable examples.

Reinforcement learning (RL) has achieved remarkable success in LLM reasoning, but whether it can also improve direct recall of parametric knowledge remains an open question. We study this question in a controlled zero-shot, one-hop, closed-book QA setting with no chain-of-thought, training only on binary correctness rewards and applying fact-level train-test deduplication to ensure gains reflect improved recall rather than reasoning or memorization. Across three model families and multiple factual QA benchmarks, RL yields ~27% average relative gains, surpassing both training- and inference-time baselines. Mechanistically, RL primarily redistributes probability mass over existing knowledge rather than acquiring new facts, moving correct answers from the low-probability tail into reliable greedy generations. Our data-attribution study reveals that the hardest examples are the most informative: those whose answers never appear in 128 pre-RL samples (only ~18% of training data) drive ~83% of the gain, since rare correct rollouts still emerge during training and get reinforced. Together, these findings broaden the role of RL beyond reasoning, repositioning it as a tool for unlocking rather than acquiring latent parametric knowledge.
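
To make the training signal concrete, here is a minimal Python sketch of a binary correctness reward of the kind the abstract describes for closed-book, no-CoT QA. The SQuAD-style normalization, the alias list, and the function names are illustrative assumptions, not the authors' implementation.

# Hedged sketch: a strictly 0/1 exact-match reward on the final answer,
# with no chain-of-thought and no partial credit. Normalization details
# are assumptions for illustration only.
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def binary_reward(generation: str, gold_aliases: list[str]) -> float:
    """Return 1.0 if the generated answer matches any gold alias, else 0.0."""
    pred = normalize(generation)
    return float(any(pred == normalize(alias) for alias in gold_aliases))

print(binary_reward("The Eiffel Tower", ["Eiffel Tower"]))  # -> 1.0

Because the reward is verifiable and strictly 0/1, it can plug into any rollout-level policy-gradient setup; the abstract does not tie the finding to a particular RL algorithm, only to training on binary correctness rewards.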

Community

We usually think of RL as the go-to tool for complex reasoning (CoT), but this paper demonstrates it is also highly effective at enhancing direct recall of parametric knowledge! In a strict, non-CoT, closed-book QA setting, RL boosted factual recall by an average of ~27% relative across the Llama, Qwen, and OLMo model families.
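
Note that the ~27% figure is a relative improvement, not an absolute one. As a purely hypothetical illustration, going from 40% to about 51% exact-match accuracy would be a ~27% relative gain ((51 - 40) / 40 ≈ 0.27); the actual per-benchmark numbers are in the paper.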

The most striking takeaways:
1️⃣ No new facts are injected: RL simply redistributes probability mass, yanking correct answers from the low-probability tail into reliable greedy generations. To put it directly, RL fundamentally optimizes the recall of latent knowledge.
2️⃣ The unexpected contribution of 0/128 samples: Remarkably, ~83% of the performance jump is driven by training on the hardest examples, the ones whose correct answer never appeared in any of 128 pre-RL samples! As long as these rare correct rollouts emerge even occasionally during training, RL captures and powerfully reinforces them (see the sketch below).
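
To make the 0/128 bucketing concrete, here is a rough Python sketch under stated assumptions: sample_fn and reward_fn are placeholders for your own sampling and scoring code (for instance the binary_reward sketch above), and this is not the paper's attribution pipeline.

from collections import Counter

def pre_rl_difficulty_buckets(sample_fn, reward_fn, dataset, n_samples=128):
    """Bucket questions by how many of n_samples pre-RL rollouts are correct.

    sample_fn(question)             -> one sampled answer string (temperature > 0)
    reward_fn(answer, gold_aliases) -> 1.0 or 0.0
    dataset                         -> iterable of (question, gold_aliases) pairs
    """
    buckets = Counter()
    for question, gold_aliases in dataset:
        n_correct = sum(reward_fn(sample_fn(question), gold_aliases)
                        for _ in range(n_samples))
        # "0/128" questions: the correct answer never appears before RL; the paper
        # reports these (~18% of training data) drive ~83% of the eventual gain.
        key = f"0/{n_samples}" if n_correct == 0 else "solved at least once"
        buckets[key] += 1
    return buckets

The design point is that sampling at non-zero temperature is exactly what lets rare correct rollouts surface during training, so the binary reward can catch and reinforce them.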

Ultimately, this work deepens our understanding of RL's true scope. RL isn't just an optimizer for reasoning trajectories: the results are compelling empirical evidence that LLMs truly "know more than they express," and RL is the key to narrowing that accessibility gap.


