arxiv:2605.03571

PatRe: A Full-Stage Office Action and Rebuttal Generation Benchmark for Patent Examination

Published on May 5 Β· Submitted by Mathsion Wong on May 6
Abstract

The PatRe benchmark models the complete patent examination process as a dynamic, multi-turn interaction between examiners and applicants, revealing key performance differences among LLMs in legal reasoning and technical novelty assessment.

AI-generated summary

Patent examination is a complex, multi-stage process requiring both technical expertise and legal reasoning, increasingly challenged by rising application volumes. Prior benchmarks predominantly view patent examination as discriminative classification or static extraction, failing to capture its inherently interactive and iterative nature, similar to the peer review and rebuttal process in academic publishing. In this paper, we introduce PatRe, the first benchmark that models the full patent examination lifecycle, including Office Action generation and applicant rebuttal. PatRe comprises 480 real-world cases and supports both oracle and retrieval-simulated evaluation settings. Our benchmark reframes patent examination as a dynamic, multi-turn process of justification and response. Extensive experiments across various LLMs reveal critical insights into model performance, including differences between proprietary and open-source models, as well as task asymmetries between examiner analysis and applicant-side rebuttal. These findings highlight both the potential and current limitations of LLMs in modeling complex, real-world legal reasoning and technical novelty judgment in patent examination. We release our code and dataset to facilitate future research on patent examination modeling.

Community

Paper submitter

PatRe is the first benchmark to model the full patent examination lifecycle as an interactive, multi-turn process between examiner and applicant.
It captures real-world dynamics such as Office Action generation and rebuttal, supporting both oracle and retrieval-based evaluation settings to assess iterative legal and technical reasoning.
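To make the multi-turn setup concrete, the examination lifecycle described above can be sketched as a simple dialogue loop. This is a hypothetical illustration only: the class and function names (`ExaminationDialogue`, `run_examination`) and the lambda "models" are invented for this sketch and are not PatRe's actual API; in the oracle setting the prior art would be gold-annotated, while the retrieval setting would populate it from a retriever.

```python
from dataclasses import dataclass, field

@dataclass
class ExaminationDialogue:
    """One examination case: claims under review plus cited prior art.

    Hypothetical names for illustration, not PatRe's actual interface.
    """
    claims: str
    prior_art: list              # oracle: gold prior art; retrieval: retrieved docs
    turns: list = field(default_factory=list)

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append((role, text))


def run_examination(dialogue, examiner, applicant, max_rounds=2):
    """Alternate Office Action generation and applicant rebuttal."""
    for _ in range(max_rounds):
        # Examiner side: generate an Office Action grounded in the prior art.
        action = examiner(dialogue)
        dialogue.add_turn("examiner", action)
        # Applicant side: generate a rebuttal responding to the objections.
        rebuttal = applicant(dialogue)
        dialogue.add_turn("applicant", rebuttal)
    return dialogue


# Toy stand-ins for LLM calls, just to exercise the loop.
examiner = lambda d: f"Claim 1 lacks novelty over {d.prior_art[0]}"
applicant = lambda d: "Amended claim 1 distinguishes over the cited art"

dialogue = run_examination(
    ExaminationDialogue(claims="1. A widget comprising...", prior_art=["D1"]),
    examiner,
    applicant,
)
```

Each round yields one examiner turn and one applicant turn, so two rounds produce a four-turn transcript that can then be scored on both the examiner-analysis and rebuttal sides.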

If you find our work interesting, we would really appreciate your support and upvote! πŸŒΏπŸš€

PatRe is open-sourced at https://github.com/AIforIP/PatRe
Project page: https://patre.wangqiyao.me/


