Reasoning-Aware Proxy Reward Model using Process Mining

BAELAB, Pusan National University, Busan, Korea

Yongjae Lee*, Taekyhun Park*, Hyerim Bae

🌟 Github | 📥 1.5B Download | 📥 7B Download | 📄 Arxiv Paper Link |

Abstract

Recent advances in sparse reward policy gradient methods have enabled effective reinforcement learning (RL) fine-tuning for post-training language models. However, for reasoning tasks such as mathematical problem solving, binarized outcome rewards provide limited feedback on intermediate reasoning steps. While some studies have attempted to address this issue by estimating overall reasoning quality, it remains unclear whether these rewards are reliable proxies for the quality of stepwise reasoning. In this study, we treat reasoning as a structured process and propose the TACReward reward model, which can be seamlessly integrated into sparse reward frameworks without additional human annotation costs or architectural modifications. TACReward aggregates stepwise structural deviations between teacher and policy reasoning using process mining techniques, producing a scalar reward in the range $[0, 1]$. Experiments on multiple mathematical reasoning benchmarks demonstrate that integrating TACReward into sparse reward frameworks encourages the policy model to improve the structural quality of its reasoning, leading to consistent performance improvements over existing sparse reward frameworks.
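To illustrate the idea of turning structural deviation between teacher and policy reasoning into a scalar reward in $[0, 1]$, here is a minimal sketch. It is not the paper's implementation: the function name `tac_reward_sketch` is hypothetical, reasoning traces are assumed to be pre-abstracted into sequences of step labels, and `difflib.SequenceMatcher` is used as a simple stand-in for the alignment-based conformance checking that process mining tools would perform.

```python
from difflib import SequenceMatcher

def tac_reward_sketch(teacher_steps, policy_steps):
    """Hypothetical sketch: map structural agreement between a teacher
    reasoning trace and a policy reasoning trace to a reward in [0, 1].
    SequenceMatcher.ratio() (2 * matches / total length) stands in for
    process-mining conformance checking over aligned step sequences."""
    if not teacher_steps and not policy_steps:
        return 1.0  # two empty traces trivially conform
    return SequenceMatcher(None, teacher_steps, policy_steps).ratio()

# Illustrative step labels (assumed abstraction, not from the paper):
teacher = ["define", "substitute", "simplify", "solve", "verify"]
policy = ["define", "simplify", "solve"]

reward = tac_reward_sketch(teacher, policy)  # 6 matched of 8 total -> 0.75
```

A dense reward of this form can be mixed with a binarized outcome reward (e.g., a weighted sum) inside an existing sparse reward RL loop, which is the integration pattern the abstract describes.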

Illustration of TACReward

Safetensors · Model size: 333k params · Tensor type: BF16

Model tree for Thrillcrazyer/TACReward7B

Finetunes
1 model
Quantizations
1 model
