---
license: mit
---

# Introduction to RLAnything

[Paper](https://arxiv.org/abs/2602.02488) | [Code](https://github.com/Gen-Verse/Open-AgentRL) | [Blog](https://yinjjiew.github.io/projects/rlanything/)

We introduce **RLAnything**, a reinforcement learning framework that forges the environment, policy, and reward model into a completely dynamic system, enhancing the training signals and improving the system as a whole.

* **Integrated Feedback for Policy:** The policy is trained with integrated outcome and step-wise signals from the reward model.
* **Consistency Feedback for Reward Model:** The reward model is jointly optimized with consistency feedback, which further improves policy training.
* **Critic Feedback for Environment:** Our theory-motivated automatic environment adaptation leverages critic feedback from both the policy and reward models to improve training for each; see the sketch after this list.
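
Taken together, these three channels form a single closed loop: the reward model scores the policy's rollouts, its own scores are checked for consistency against outcomes, and the environment adapts using critic feedback from both. Below is a minimal, purely conceptual Python sketch of that loop. All names and update rules here (`Environment.adapt`, `Policy.update`, `RewardModel.update`, and the toy reward arithmetic) are hypothetical stand-ins inferred from the bullets above, not the Open-AgentRL API.

```python
# NOTE: purely illustrative sketch; every name and update rule below is a
# hypothetical stand-in, not the actual Open-AgentRL implementation.
from dataclasses import dataclass, field
import random

@dataclass
class Trajectory:
    steps: list = field(default_factory=list)  # (state, action) pairs
    outcome: float = 0.0                       # terminal task-success signal

class Environment:
    """Toy environment whose difficulty is adapted from critic feedback."""
    def __init__(self) -> None:
        self.difficulty = 0.5

    def rollout(self, policy: "Policy") -> Trajectory:
        traj = Trajectory()
        for _ in range(8):
            state = random.random() * self.difficulty
            traj.steps.append((state, policy.act(state)))
        traj.outcome = float(random.random() > self.difficulty)
        return traj

    def adapt(self, policy_critic: float, reward_critic: float) -> None:
        # (3) Critic feedback from both models nudges the difficulty toward
        # the regime where both still receive informative training signal.
        self.difficulty += 0.05 * (policy_critic + reward_critic)
        self.difficulty = min(max(self.difficulty, 0.1), 0.9)

class Policy:
    def act(self, state: float) -> int:
        return int(state > 0.25)

    def update(self, step_rewards: list, outcome_reward: float) -> float:
        # (1) Integrated feedback: combine step-wise and outcome rewards.
        ret = outcome_reward + sum(step_rewards)
        # ... a real gradient step on the policy would go here ...
        return ret

class RewardModel:
    def score_steps(self, traj: Trajectory) -> list:
        return [0.1 * state for state, _ in traj.steps]

    def update(self, traj: Trajectory) -> float:
        # (2) Consistency feedback: penalize disagreement between the summed
        # step-wise scores and the observed outcome.
        inconsistency = abs(sum(self.score_steps(traj)) - traj.outcome)
        # ... a real gradient step on the reward model would go here ...
        return -inconsistency

env, policy, rm = Environment(), Policy(), RewardModel()
for _ in range(100):
    traj = env.rollout(policy)
    policy_feedback = policy.update(rm.score_steps(traj), traj.outcome)
    reward_feedback = rm.update(traj)
    env.adapt(policy_feedback, reward_feedback)
```

The ordering is the point of the sketch: each iteration produces feedback for all three components before the next rollout, which is what makes the system completely dynamic.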

<p align="center">
<img src="https://github.com/yinjjiew/Data/raw/main/rlanything/rlanythingoverview.png" width="100%"/>
</p>

<p align="center">
<img src="https://github.com/yinjjiew/Data/raw/main/rlanything/rlanythingmaintable.png" width="100%"/>
</p>

# Citation

```
@article{wang2026rlanything,
  title={RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System},
  author={Wang, Yinjie and Xie, Tianbao and Shen, Ke and Wang, Mengdi and Yang, Ling},
  journal={arXiv preprint arXiv:2602.02488},
  year={2026}
}
```