Dataset Card for PILOT-Bench
Dataset Description
PILOT-Bench (Patent InvaLidation Trial Benchmark) is a benchmark dataset designed to evaluate the structured legal reasoning capabilities of Large Language Models (LLMs) based on U.S. Patent Trial and Appeal Board (PTAB) decision documents.
- Repository: TeamLab/pilot-bench
- Paper: PILOT-Bench: A Benchmark for Legal Reasoning in the Patent Domain with IRAC-Aligned Classification Tasks
- Point of Contact: Yehoon Jang (jangyh0420@pukyong.ac.kr)
Dataset Summary
PILOT-Bench aligns PTAB appeal cases with USPTO patent data at the case level. It formalizes three classification tasks aligned with the IRAC (Issue, Rule, Application, Conclusion) framework, the standard for legal analysis. This dataset aims to measure how logically LLMs can understand and classify unstructured legal documents.
Dataset Structure
Data Instances
The data is stored in `pilot-bench.tar.gz`. Each instance consists of metadata and text segments partitioned into the IRAC structure using Gemini-2.5-pro.
```json
{
  "file_name": "2017002267_DECISION",
  "appellant_arguments": "...",
  "examiner_findings": "...",
  "ptab_opinion": "...",
  "issue_type": ["103"],
  "board_rulings": ["37 CFR 41.50", "37 CFR 41.50(f)"],
  "subdecision": "Affirmed-in-Part"
}
```
Data Fields
The key fields in `ptab.json` and the `opinion_split` data are as follows:
| Field Name (Key) | Type | Description |
|---|---|---|
| `proceedingNumber` | int | PTAB proceeding number. A unique ID identifying each appeal case. |
| `appellant_arguments` | str | Appellant arguments. Legal grounds and arguments from the appellant, extracted via Gemini-2.5-pro. |
| `examiner_findings` | str | Examiner findings. Sections containing the examiner's reasons for rejection and underlying facts. |
| `ptab_opinion` | str | Board opinion. The full text of the legal judgment rendered by the PTAB. |
| `issue_type` | list[str] | Legal issue. List of statutory grounds involved (e.g., 35 U.S.C. §101, 102, 103, 112). |
| `board_rulings` | list[str] | Board authorities (rule). Procedural provisions cited by the Board (e.g., 37 C.F.R. § 41.50). |
| `subdecision` | str | Conclusion (fine-grained). One of 23 specific outcome types (e.g., Affirmed, Reversed). |
| `subdecision_coarse` | str | Conclusion (coarse-grained). Outcomes simplified into 6 categories for analysis convenience. |
| `respondentPatentNumber` | str | The U.S. patent number subject to the appeal. |
| `decisionDate` | str | The date the final decision was rendered by the PTAB. |
Data Instance Example
```json
{
  "proceedingNumber": 2017002267,
  "appellant_arguments": "The Appellant argues that the Examiner erred in finding...",
  "examiner_findings": "The Examiner maintains the rejection of claims 1-10 under 35 U.S.C. 103...",
  "ptab_opinion": "We have reviewed the arguments and find that the Examiner's position...",
  "issue_type": ["103"],
  "board_rulings": ["37 CFR 41.50"],
  "subdecision": "Affirmed",
  "file_name": "2017002267_DECISION"
}
```
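As a quick sanity check, a record like the one above can be validated against the field table in this card. This is a minimal sketch: `validate_record` and `EXPECTED_SCHEMA` are illustrative helpers written for this card, not part of the dataset's tooling.

```python
# Minimal schema check for a PILOT-Bench record, mirroring the field table
# in this card. `validate_record` is a hypothetical helper, not dataset tooling.

EXPECTED_SCHEMA = {
    "proceedingNumber": int,
    "appellant_arguments": str,
    "examiner_findings": str,
    "ptab_opinion": str,
    "issue_type": list,
    "board_rulings": list,
    "subdecision": str,
}

def validate_record(record: dict) -> list:
    """Return a list of schema problems; an empty list means the record looks valid."""
    problems = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}")
    return problems

example = {
    "proceedingNumber": 2017002267,
    "appellant_arguments": "The Appellant argues that the Examiner erred in finding...",
    "examiner_findings": "The Examiner maintains the rejection of claims 1-10 under 35 U.S.C. 103...",
    "ptab_opinion": "We have reviewed the arguments and find that the Examiner's position...",
    "issue_type": ["103"],
    "board_rulings": ["37 CFR 41.50"],
    "subdecision": "Affirmed",
}

print(validate_record(example))  # → []
```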
How to use
You can load the dataset with the Hugging Face `datasets` library. Since the data is stored as a compressed archive (`pilot-bench.tar.gz`), specify the file via the `data_files` parameter.
```python
from datasets import load_dataset

# Load the dataset from the compressed archive
dataset = load_dataset("Yehoon/pilot-bench", data_files="pilot-bench.tar.gz")
```
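Once loaded, records can be iterated like any `datasets` split. As a sketch of downstream use, label distributions over `issue_type` and `subdecision` can be tallied with `collections.Counter`; the records below are hand-made stand-ins shaped like PILOT-Bench rows, so the counts are illustrative only.

```python
from collections import Counter

# Illustrative records shaped like PILOT-Bench rows; in practice you would
# iterate over the split loaded above instead of this hand-made sample.
sample = [
    {"issue_type": ["103"], "subdecision": "Affirmed"},
    {"issue_type": ["102", "103"], "subdecision": "Reversed"},
    {"issue_type": ["101"], "subdecision": "Affirmed"},
]

# issue_type is a list per case, so flatten before counting.
issue_counts = Counter(issue for rec in sample for issue in rec["issue_type"])
outcome_counts = Counter(rec["subdecision"] for rec in sample)

print(issue_counts)    # → Counter({'103': 2, '102': 1, '101': 1})
print(outcome_counts)  # → Counter({'Affirmed': 2, 'Reversed': 1})
```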
Dataset Creation
Curation Rationale
While existing legal domain datasets often focus on simple document classification, PILOT-Bench was built to systematically evaluate how well LLMs understand the IRAC (Issue-Rule-Application-Conclusion) framework within the highly specialized patent domain. Specifically, it was curated to separate unstructured decisions into logical units, allowing models to utilize step-by-step information.
Source Data
- Data Collection: Collected from the USPTO Open Data portal and public PTAB (Patent Trial and Appeal Board) records.
- Data Preprocessing:
  - After collecting raw PDF and text data, the full texts were split into logical sections (`appellant_arguments`, `examiner_findings`, `ptab_opinion`) using the Gemini-2.5-pro model.
  - Metadata such as patent numbers, application numbers, and decision dates were aligned for each case into `ptab.json`.
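The alignment step described above can be sketched as a simple join on a shared key. This is a hedged illustration: the real pipeline's keys, file layout, and values are not published here, and `section_data` / `metadata` (including the patent number and date shown) are invented stand-ins.

```python
import json

# Illustrative stand-ins for the two intermediate products described above:
# LLM-split text sections keyed by decision file name, and per-case USPTO
# metadata. All values here are placeholders, not real dataset content.
section_data = {
    "2017002267_DECISION": {
        "appellant_arguments": "...",
        "examiner_findings": "...",
        "ptab_opinion": "...",
    }
}
metadata = {
    "2017002267_DECISION": {
        "proceedingNumber": 2017002267,
        "respondentPatentNumber": "9876543",   # placeholder patent number
        "decisionDate": "2018-05-01",          # placeholder date
    }
}

# Join the two products on file name into ptab.json-style records.
records = [
    {"file_name": name, **sections, **metadata.get(name, {})}
    for name, sections in section_data.items()
]

print(json.dumps(records[0], indent=2))
```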
Considerations for Using the Data
Social Impact of Dataset
This dataset can assist patent legal experts in decision-making and activate research on the automated analysis of complex patent appeal documents. Using the IRAC structure serves as a foundation for improving the explainability of legal AI.
Limitations and Bias
- Domain Specificity: This dataset is limited to U.S. PTAB Ex parte appeals; caution is needed when generalizing to other jurisdictions or legal fields (e.g., criminal, civil).
- Preprocessing Noise: Since an LLM (Gemini) was used for section splitting, there may be rare instances of split errors or noise.
Additional Information
Citation Information
If you use this dataset or code in your research, please cite:
```bibtex
@inproceedings{jang2025pilotbench,
  title     = {PILOT-Bench: A Benchmark for Legal Reasoning in the Patent Domain with IRAC-Aligned Classification Tasks},
  author    = {Yehoon Jang and Chaewon Lee and Hyun-seok Min and Sungchul Choi},
  booktitle = {Proceedings of the EMNLP 2025 (NLLP Workshop)},
  year      = {2025},
  url       = {https://github.com/TeamLab/pilot-bench}
}
```
License & Disclaimer
License
This dataset is provided under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
Disclaimer
- Research and Education: This dataset is constructed purely for research and educational purposes.
- Legal Liability: The analysis results from this dataset do not have legal effect and should not be used as a tool for legal advice, automated adjudication, or practical PTAB decision-making.
- Data Quality: The research team is not responsible for potential noise arising from the LLM-based splitting process.
Acknowledgments
This work was supported by:
- National Research Foundation of Korea (NRF) – Grant No. RS-2024-00354675 (70%)
- IITP (ICT Challenge and Advanced Network of HRD) – Grant No. IITP-2023-RS-2023-00259806 (30%) under the supervision of the Ministry of Science and ICT (MSIT), Korea.