arxiv:2604.05963

QiMeng-PRepair: Precise Code Repair via Edit-Aware Reward Optimization

Published on Apr 7 · Submitted by Changxin Ke on Apr 8
Abstract

The PRepair framework reduces over-editing in program repair by combining controlled bug injection with edit-aware policy optimization, maximizing correct code reuse while minimizing unnecessary modifications.

AI-generated summary

Large Language Models (LLMs) achieve strong program repair performance but often suffer from over-editing, where excessive modifications overwrite correct code and hinder bug localization. We systematically quantify the impact of over-editing and introduce the precise repair task, which maximizes the reuse of correct code while fixing only the buggy parts. Building on this insight, we propose PRepair, a framework that mitigates over-editing and improves repair accuracy. PRepair has two components: Self-Breaking, which generates diverse buggy programs via controlled bug injection and min-max sampling, and Self-Repairing, which trains models with Edit-Aware Group Relative Policy Optimization (EA-GRPO), using an edit-aware reward to encourage minimal yet correct edits. Experiments show that PRepair improves repair precision by up to 31.4% under fix_1@1, a metric that jointly considers repair correctness and edit extent, and significantly increases decoding throughput when combined with speculative editing, demonstrating its potential for precise and practical code repair.
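To make the "edit-aware reward" idea concrete, here is a minimal, hypothetical sketch of such a reward: correctness (e.g., passing tests) minus a penalty proportional to how much of the buggy program was changed. The function name, the `lam` weight, and the line-level `difflib` edit measure are illustrative assumptions; the paper's actual EA-GRPO reward is not specified in this abstract.

```python
import difflib


def edit_aware_reward(buggy: str, candidate: str, tests_pass: bool,
                      lam: float = 0.5) -> float:
    """Illustrative edit-aware reward (hypothetical, not the paper's exact formula).

    A correct patch earns reward 1.0, reduced by a penalty that grows with
    the fraction of the buggy program's lines it rewrote. Incorrect patches
    earn 0, so the penalty only ranks correct patches against each other.
    """
    # Fraction of lines changed, via a line-level diff similarity ratio.
    sm = difflib.SequenceMatcher(a=buggy.splitlines(), b=candidate.splitlines())
    edit_fraction = 1.0 - sm.ratio()
    correctness = 1.0 if tests_pass else 0.0
    # Minimal correct edits score highest; over-edited correct fixes score lower.
    return correctness * (1.0 - lam * edit_fraction)


# A one-line fix outscores a correct but heavily rewritten version.
buggy = "def add(a, b):\n    return a - b\n"
minimal_fix = "def add(a, b):\n    return a + b\n"
full_rewrite = "def add(x, y):\n    s = x + y\n    return s\n"
assert edit_aware_reward(buggy, minimal_fix, True) > edit_aware_reward(buggy, full_rewrite, True)
```

Under this sketch, the reward orders candidate patches by edit economy among those that pass the tests, which is the incentive the abstract describes for encouraging minimal yet correct edits.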

Community


How should over-editing be handled?


Get this paper in your agent:

hf papers read 2604.05963
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 2

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 1