---
license: mit
library_name: transformers
pipeline_tag: text-generation
---

# RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System

[Paper](https://huggingface.co/papers/2602.02488) | [Code](https://github.com/Gen-Verse/Open-AgentRL) | [Blog](https://yinjjiew.github.io/projects/rlanything/) | [Project Page](https://huggingface.co/collections/Gen-Verse/open-agentrl)

**RLAnything** is a reinforcement learning framework that dynamically forges the environment, policy, and reward model through closed-loop optimization, amplifying learning signals and strengthening the overall RL system for any LLM or agentic scenario. Specifically, the policy is trained with integrated feedback that combines step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each, enabling learning from experience.

## Key Features

* **Integrated Feedback for Policy:** The policy is trained with integrated outcome and step-wise signals from the reward model (see the sketch after this list).
* **Consistency Feedback for Reward Model:** The reward model is jointly optimized via consistency feedback, which further improves policy training.
* **Critic Feedback for Environment:** Theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each.
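As an illustration of the first bullet, below is a minimal, schematic sketch of how step-wise reward-model scores and a terminal outcome signal might be blended into per-step training signals. The function name and the linear mixing scheme are assumptions chosen for clarity, not the exact formulation used by RLAnything; see the paper for the actual objective.

```python
# Schematic sketch of integrated feedback: blending step-wise reward-model
# scores with a trajectory's terminal outcome signal. Illustrative only --
# the weighting scheme and all names here are assumptions, not the paper's
# exact formulation.
from typing import List

def integrated_step_rewards(
    step_rewards: List[float],   # step-wise scores from the reward model, one per step
    outcome_reward: float,       # terminal outcome signal (e.g., task success in {0, 1})
    beta: float = 0.5,           # hypothetical mixing weight between the two signals
) -> List[float]:
    """Blend step-wise rewards with the outcome reward into per-step signals."""
    return [beta * r + (1.0 - beta) * outcome_reward for r in step_rewards]

# Example: a 3-step trajectory that ultimately succeeds.
print(integrated_step_rewards([0.2, 0.6, 0.9], outcome_reward=1.0))
# -> [0.6, 0.8, 0.95]
```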
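## Usage

Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, a trained policy checkpoint should load through the standard `transformers` generation API. The snippet below is a minimal sketch; the repository id is a placeholder, so substitute the released RLAnything checkpoint from the [project collection](https://huggingface.co/collections/Gen-Verse/open-agentrl).

```python
# Minimal text-generation sketch with transformers.
# NOTE: "Gen-Verse/RLAnything" is a placeholder repo id -- replace it with
# the actual released checkpoint from the Gen-Verse collection.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gen-Verse/RLAnything"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```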