---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-Math-7B
library_name: transformers
---
# Model Card for RLFR-Qwen2.5-Math-7B

RLFR-Qwen2.5-Math-7B is trained from Qwen2.5-Math-7B with the RLFR framework, which introduces a flow reward derived from the latent space, extending RLVR with latent-space reward signals.
## Model Details

### Key Features
- 💡 **Sound Flow Environment**: RLFR establishes flow fields over policy latents from either off-policy high-quality data or on-policy rejection-sampling data, and quantifies the velocity deviations of policy latents within these fields to serve as a reward signal. We are the first to demonstrate that a well-established flow field can be a sound reward environment, highlighting the underexplored yet highly expressive latent space.
- 🛠️ **Training Details**:
  - Timestep choice matters when deriving flow rewards: larger, less noisy timesteps are recommended, since they give more accurate velocity predictions.
  - The metrics used for online rejection sampling can be chosen flexibly to direct the constitution of the reference flow used for reward calculation.
- 📈 **Reward Behavior**: Flow rewards allow arbitrary expert off-policy data to serve as the reference for constituting the reward signal. Additionally, flow rewards rely on efficient context dependence natively compressed in the latent space, rather than on individual token denotations, for context comprehension.
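To make the velocity-deviation idea concrete, here is a minimal, self-contained sketch, not the RLFR implementation: `flow_reward`, the toy `velocity_model`, and the rectified-flow parameterization are illustrative assumptions. A policy latent is interpolated toward noise at a large (low-noise) timestep, and the reward is the negative deviation between the velocity predicted by the reference flow field and the velocity target implied by the latent itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_reward(h_policy, velocity_model, t=0.8):
    """Hypothetical sketch of a velocity-deviation flow reward.

    h_policy:       policy latent for one token, shape (d,)
    velocity_model: stands in for a flow field fitted on reference
                    latents; maps (x_t, t) -> predicted velocity
    t:              a large, low-noise timestep, per the recommendation above
    """
    noise = rng.standard_normal(h_policy.shape)
    # Rectified-flow interpolation between the latent and noise.
    x_t = (1.0 - t) * h_policy + t * noise
    # Velocity target that would carry x_t along the policy latent's path.
    v_target = noise - h_policy
    # Velocity the reference flow field predicts at (x_t, t).
    v_pred = velocity_model(x_t, t)
    # Small deviation -> the latent lies inside the reference flow.
    return -float(np.linalg.norm(v_pred - v_target))

# Toy reference field: pretend all reference latents sit at the origin,
# so the fitted velocity at (x_t, t) is simply x_t / t.
ref_model = lambda x_t, t: x_t / t

r_in = flow_reward(np.zeros(4), ref_model)        # latent on the reference manifold
r_out = flow_reward(np.ones(4) * 5.0, ref_model)  # latent far from the reference
```

In this toy setup, a latent matching the reference distribution receives a (near-)zero reward, while a latent far from it is penalized in proportion to its distance, which is the qualitative behavior the flow reward is meant to provide.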
### Model Description

- Trained from model: Qwen2.5-Math-7B
- Trained from dataset: RLFR-Dataset-LM
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JingHaoZ/RLFR-Qwen2.5-Math-7B"
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Citation

If you find our work helpful, feel free to give us a citation.

```bibtex
@article{zhang2025rlfr,
  title={RLFR: Extending Reinforcement Learning for LLMs with Flow Environment},
  author={Zhang, Jinghao and Zheng, Naishan and Li, Ruilin and Cheng, Dongzhou and Liang, Zheming and Zhao, Feng and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2510.10201},
  year={2025}
}
```
