Introduction to RLAnything

Paper | Code | Blog

We introduce RLAnything, a reinforcement learning framework that forges the environment, policy, and reward model into a completely dynamic system, enhancing training signals and improving the system as a whole.

  • Integrated Feedback for Policy: The policy is trained with integrated outcome and step-wise signals from the reward model (see the sketch after this list).
  • Consistency Feedback for Reward Model: The reward model is jointly optimized with consistency feedback, which further improves policy training.
  • Critic Feedback for Environment: Our theory-motivated automatic environment adaptation leverages critic feedback from both the policy and the reward model to improve training for each.
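
This card does not spell out the exact training objective, but the integrated-feedback idea can be illustrated with a small sketch. Everything below is hypothetical: the function name integrated_returns, the mixing weight alpha, and the discount gamma are illustrative stand-ins, not the paper's actual formulation.

from typing import List

def integrated_returns(
    step_scores: List[float],  # step-wise scores from the reward model
    outcome_reward: float,     # trajectory-level outcome signal
    alpha: float = 0.5,        # hypothetical mixing weight
    gamma: float = 1.0,        # hypothetical discount factor
) -> List[float]:
    """Blend step-wise and outcome rewards into per-step returns."""
    # Per-step reward: scale the reward model's step scores, and credit
    # the trajectory-level outcome reward at the final step.
    rewards = [alpha * s for s in step_scores]
    rewards[-1] += (1.0 - alpha) * outcome_reward
    # Discounted return, accumulated backwards over the trajectory.
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    returns.reverse()
    return returns

# Example: a three-step trajectory with a successful outcome.
print(integrated_returns([0.2, 0.5, 0.9], outcome_reward=1.0))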

Citation

@article{wang2026rlanything,
  title={RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System},
  author={Wang, Yinjie and Xie, Tianbao and Shen, Ke and Wang, Mengdi and Yang, Ling},
  journal={arXiv preprint arXiv:2602.02488},
  year={2026}
}
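
Quickstart

The snippet below is a minimal loading sketch, assuming the checkpoint can be loaded with Hugging Face transformers as a sequence-classification-style reward head; that packaging is an assumption of this card, so consult the accompanying code release for the actual scoring interface and prompt format.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Gen-Verse/RLAnything-OS-Reward-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the reward model exposes a classification-style head;
# the real head and expected input format may differ.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Step 1: open the settings panel.", return_tensors="pt").to(model.device)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()  # hypothetical step-wise score
print(score)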