ReFusion-8B-CJ-GRPO
ReFusion 8B trained with CJ-GRPO (Consistency-Justified GRPO). It achieves an 83.9% nonzero-reward rate and a 0.390 average reward on 124 test tasks (+23.4 pp over the SFT baseline), using per-step trajectory consistency with mu=1.
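To make the two headline metrics concrete, here is a minimal sketch of how a nonzero-reward rate and an average reward are derived from per-task rewards; the reward values below are illustrative, not from the actual evaluation.

```python
def summarize(rewards):
    """Return (nonzero-reward rate, average reward) over a list of per-task rewards.

    Nonzero rate = fraction of tasks earning any reward at all;
    average reward = mean reward across all tasks (zeros included).
    """
    nonzero_rate = sum(r != 0 for r in rewards) / len(rewards)
    avg_reward = sum(rewards) / len(rewards)
    return nonzero_rate, avg_reward

# Hypothetical rewards for five tasks
rate, avg = summarize([0.0, 0.5, 1.0, 0.0, 0.45])
```

A model can thus have a high nonzero rate but a modest average reward when many tasks earn only partial credit.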
Paper
Concentrate or Collapse: When Reinforcement Learning Meets Diffusion Language Models for Web Planning
- Author: Muhammad Enrizky Brillian
- Institution: University of Toronto Scarborough
- Code: https://github.com/billy-enrizky/openbrowser-ai
Training Details
- Dataset: FormFactory (992 train / 124 val / 124 test tasks, 25 form types, 8 domains)
- Infrastructure: NVIDIA L40S (ReFusion) / A10G (FS-DFM) on Modal.com
- Framework: PyTorch + PEFT (LoRA/QLoRA)
- Training prompts: 50 (sequence-level), G=4 rollouts per prompt
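The G=4 rollouts per prompt feed GRPO's group-relative advantage estimate. The sketch below shows the standard GRPO normalization (reward centered and scaled by the group's statistics); it is a generic illustration under that assumption, not the paper's exact CJ-GRPO objective, and the reward values are made up.

```python
import statistics

def group_advantages(rewards, eps=1e-6):
    """GRPO-style advantage: normalize each rollout's reward by the
    mean and standard deviation of its own rollout group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]

# G = 4 rollouts for a single prompt (illustrative rewards)
adv = group_advantages([0.0, 1.0, 0.5, 0.5])
```

Advantages within a group sum to zero, so rollouts are rewarded only relative to their siblings, which removes the need for a separate value model.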
Citation
@article{brillian2026flowgrpo,
  title={Concentrate or Collapse: When Reinforcement Learning Meets Diffusion Language Models for Web Planning},
  author={Brillian, Muhammad Enrizky},
  year={2026}
}