---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
# RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System
[Paper](https://huggingface.co/papers/2602.02488) | [Code](https://github.com/Gen-Verse/Open-AgentRL) | [Blog](https://yinjjiew.github.io/projects/rlanything/) | [Project Page](https://huggingface.co/collections/Gen-Verse/open-agentrl)
**RLAnything** is a reinforcement learning framework that dynamically forges the environment, policy, and reward model through closed-loop optimization, amplifying learning signals and strengthening the overall RL system for any LLM or agentic scenario.
Specifically, the policy is trained with integrated feedback from step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each, enabling learning from experience.
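
For intuition, here is a minimal sketch of how an integrated policy signal might blend step-wise and outcome feedback. The function name, the broadcast-and-mix weighting, and the `alpha` coefficient are illustrative assumptions, not the paper's exact formulation:

```python
from typing import List

def integrated_return(step_rewards: List[float], outcome_reward: float,
                      alpha: float = 0.5) -> List[float]:
    """Blend per-step reward-model scores with a trajectory-level outcome
    signal into one training signal per step (illustrative weighting only)."""
    # Broadcast the single outcome reward to every step, then mix it with
    # the dense step-wise scores using a fixed coefficient alpha.
    return [alpha * r_t + (1.0 - alpha) * outcome_reward for r_t in step_rewards]
```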
## Key Features
* **Integrated Feedback for Policy:** The policy is trained with integrated outcome and step-wise signals from the reward model.
* **Consistency Feedback for Reward Model:** The reward model is jointly optimized via consistency feedback, which in turn further improves policy training.
* **Critic Feedback for Environment:** Theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each.
<p align="center">
<img src="https://github.com/yinjjiew/Data/raw/main/rlanything/rlanythingoverview.png" width="100%"/>
</p>
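
The three feedback loops above can be pictured as one closed-loop iteration. The sketch below is a schematic under assumed interfaces (`policy`, `reward_model`, `env`, `judge_outcome`, and their methods are hypothetical placeholders), not the released training code:

```python
def closed_loop_step(env, policy, reward_model, judge_outcome):
    """One schematic RLAnything-style iteration (all interfaces assumed)."""
    # 1) Roll out the current policy in the current environment.
    trajectory = policy.rollout(env)

    # 2) Integrated feedback for the policy: dense step-wise scores from
    #    the reward model plus a sparse trajectory-level outcome signal.
    step_rewards = [reward_model.score(step) for step in trajectory.steps]
    outcome = judge_outcome(trajectory)
    policy.update(trajectory, step_rewards, outcome)

    # 3) Consistency feedback for the reward model: penalize disagreement
    #    between its step-wise scores and the observed outcome.
    reward_model.update_consistency(step_rewards, outcome)

    # 4) Critic feedback for the environment: adapt the task distribution
    #    using critic signals from both the policy and the reward model.
    env.adapt(policy.critic_feedback(), reward_model.critic_feedback())
```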
## Results
Empirically, each added component consistently improves the overall system, and RLAnything yields substantial gains across various representative LLM and agentic tasks, boosting Qwen3-VL-8B-Thinking by 9.1% on OSWorld and Qwen2.5-7B-Instruct by 18.7% and 11.9% on AlfWorld and LiveBench, respectively.
<p align="center">
<img src="https://github.com/yinjjiew/Data/raw/main/rlanything/rlanythingmaintable.png" width="100%"/>
</p>
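## Usage
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, a checkpoint released under this framework should load with the standard `transformers` API. The repo id below is a placeholder; substitute the actual model id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gen-Verse/<model-name>"  # placeholder: use the released checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain reinforcement learning in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```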
## Citation
```bibtex
@article{wang2026rlanything,
  title={RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System},
  author={Wang, Yinjie and Xie, Tianbao and Shen, Ke and Wang, Mengdi and Yang, Ling},
  journal={arXiv preprint arXiv:2602.02488},
  year={2026}
}
```