|
|
--- |
|
|
license: other |
|
|
task_categories: |
|
|
- text-generation |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- limitations |
|
|
- peer-review |
|
|
pretty_name: Limitation_dataset_BAGELS |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
--- |
|
|
|
|
|
|
|
|
## Overview
|
|
Limitation_dataset_BAGELS is a structured corpus of JSON files drawn from ACL 2023 (3,013 papers), ACL 2024 (2,727 papers), NeurIPS 2021–2022 (7,069 papers), and PeerJ (774 papers). Each record includes the title, abstract, and sectionized full text (e.g., Introduction, Related Work, Methodology, Results/Experiments). As ground truth, ACL 2023 and ACL 2024 contain only author-mentioned limitations, while NeurIPS 2021–2022 and PeerJ contain both author-mentioned limitations and reviewer-derived signals (OpenReview for NeurIPS, PeerJ reviews for PeerJ). Counts by label: ACL 2023 (2,558 with an author limitation, 455 without), ACL 2024 (2,440 with, 287 without), NeurIPS 2021–2022 (2,830 with an author limitation and/or OpenReview signals, 4,239 without), and PeerJ (774 papers, all with ground truth). The dataset supports limitation detection, span extraction/summarization, retrieval and QA over scholarly articles, and alignment analyses between author-stated limitations and reviewer feedback.
|
|
|
|
|
|
|
|
## Dataset at a glance

| Subset | # Papers | With ground truth | Without ground truth | Ground-truth definition |
|---------------------|---------:|------------------:|---------------------:|---|
| ACL 2023 | 3,013 | 2,558 | 455 | Author-mentioned limitation (`Limitation`) present. |
| ACL 2024 | 2,727 | 2,440 | 287 | Author-mentioned limitation (`Limitation`) present. |
| NeurIPS 2021–2022 | 7,069 | 2,830 | 4,239 | Author-mentioned limitation (`Limitations Refined`) and/or OpenReview-derived reviewer comment (`Reviewer Comment`). |
| PeerJ | 774 | 774 | 0 | Author-mentioned limitation and PeerJ reviewer reports. |
| **Total** | **13,583** | **8,602** | **4,981** | |
|
|
|
|
|
## Dataset Structure |
|
|
This dataset is organized into several folders based on the source venue (ACL, NeurIPS, PeerJ) and the specific data included for each paper. |
|
|
|
|
|
### Total Limitations |
|
|
- On average, each paper has 8 author-mentioned limitations, and each reviewed paper has 10 additional limitations from OpenReview. A rough total: 4,998 ACL papers × 8 = 39,984; 2,830 NeurIPS papers × (8 + 10) = 50,940; 4,239 NeurIPS papers × 8 = 33,912; 774 PeerJ papers × 8 = 6,192; altogether ≈ 131,028 limitations.
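A quick back-of-the-envelope check of this estimate, using only the per-paper averages and subset counts stated above (variable names are illustrative):

```python
# Rough limitation-count estimate from the stated averages:
# 8 author limitations per paper, 10 OpenReview limitations per reviewed paper.
AVG_AUTHOR, AVG_OPENREVIEW = 8, 10

acl = (2558 + 2440) * AVG_AUTHOR                     # 39,984
neurips_both = 2830 * (AVG_AUTHOR + AVG_OPENREVIEW)  # 50,940
neurips_author = 4239 * AVG_AUTHOR                   # 33,912
peerj = 774 * AVG_AUTHOR                             # 6,192

print(acl + neurips_both + neurips_author + peerj)   # 131,028
```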
|
|
|
|
|
|
|
|
### ACL Data (2023-2024) |
|
|
- ACL_23_with_limitation: Contains ACL 2023 papers that include a dedicated "Limitations" section. |
|
|
|
|
|
- ACL_23_no_limitation: Contains ACL 2023 papers that do not include a dedicated "Limitations" section. |
|
|
|
|
|
- ACL_24_with_limitation: Contains ACL 2024 papers that include a dedicated "Limitations" section. |
|
|
|
|
|
- ACL_24_no_limitation: Contains ACL 2024 papers that do not include a dedicated "Limitations" section. |
|
|
|
|
|
### NeurIPS Data (2021-2022) |
|
|
- Neurips_21_22_Limitation_OpenReview: Contains NeurIPS 2021-2022 papers. The ground truth for each paper combines author-stated limitations with critiques from OpenReview. |
|
|
|
|
|
- Neurips_21_22_no_openreview: Contains NeurIPS 2021-2022 papers where the ground truth consists only of author-stated limitations. |
|
|
|
|
|
- NeurIPS_with_cited: Our most comprehensive dataset. Contains the NeurIPS 2021-2022 papers along with their full citation network ("Cited In" and "Cited By" papers). The ground truth includes both author-stated limitations and OpenReview critiques. |
|
|
|
|
|
### PeerJ |
|
|
- peerj_json_files: Contains PeerJ papers. The ground truth for each paper combines author-stated limitations with critiques from the paper's PeerJ reviews.
|
|
|
|
|
|
|
|
## Schema |
|
|
|
|
|
### ACL 2023 & ACL 2024 |
|
|
- `File Number` *(string)* |
|
|
- `Title` *(string)* |
|
|
- `Limitation` *(string)* (author-mentioned limitation; used as ground truth)
|
|
- `abstractText` *(string)* |
|
|
- Section keys *(strings)*, e.g.: `"1 Introduction"`, `"2 Related Work"`, `"3 Methodology"`, `"Results and Experiments"`, `"Data"`, `"Other sections"` |
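A minimal loading sketch for one ACL record; the folder and file names below are illustrative, so adjust the path to your local copy:

```python
import json
from pathlib import Path

# Load a single ACL record (path is illustrative).
record = json.loads(Path("ACL_23_with_limitation/123.json").read_text())

title = record["Title"]
ground_truth = record["Limitation"]  # author-mentioned limitation
abstract = record["abstractText"]

# Every remaining key is a section heading preserved verbatim.
reserved = {"File Number", "Title", "Limitation", "abstractText"}
sections = {k: v for k, v in record.items() if k not in reserved}
print(title, sorted(sections))
```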
|
|
|
|
|
|
|
|
### NeurIPS 2021–2022 |
|
|
- `File Number` *(string)* |
|
|
- `Title` *(string)* |
|
|
- `Limitation` *(string)* (author-mentioned limitation, raw)

- `Limitations Refined` *(string)* (author-mentioned limitation after removing noisy sentences from other sections; used as ground truth)

- `Reviewer Comment` *(string)* — concatenation of reviewer limitation excerpts, formatted per reviewer; used as ground truth
|
|
- `Reviewer Summary` *(string)* — concatenation of reviewer summaries, formatted per reviewer |
|
|
- `abstractText` *(string)* |
|
|
- Section keys *(strings)*, e.g.: `"1 Introduction"`, `"2 Related Work"`, `"3 Methodology"`, `"Results and Experiments"`, `"Data"`, `"Other sections"` |
|
|
- `Author mentioned Limitation` *(string)* — extracted span(s) |
|
|
|
|
|
### NeurIPS_with_cited |
|
|
- Each JSON file in this directory represents a single NeurIPS paper and its associated data, with the following schema: |
|
|
|
|
|
- `file_id` (string): A unique identifier for the paper (e.g., "neurips_1048"). |
|
|
|
|
|
- `title` (string): The title of the paper. |
|
|
|
|
|
- `abstract` (string): The abstract of the paper. |
|
|
|
|
|
- `author_limitations_gt` (string): The refined, author-stated limitations, which serve as one component of the ground truth. |
|
|
|
|
|
- `reviewer_limitations_gt` (string): A concatenation of limitation-related excerpts from OpenReview comments, serving as the other component of the ground truth. |
|
|
|
|
|
- `full_text` (dict): A dictionary containing the full text of the paper, where keys are section titles (e.g., "1 Introduction") and values are the text of that section. |
|
|
|
|
|
- `cited_in` (list): A list of papers from the input paper's bibliography (i.e., papers it cites). Each element in the list is an object containing the cited paper's text. |
|
|
|
|
|
- `cited_by` (list): A list of papers that cite the input paper. Each element is an object containing the citing paper's text. |
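A sketch of walking one `NeurIPS_with_cited` record and its citation network, assuming the schema above (the file path is illustrative):

```python
import json
from pathlib import Path

# Load one NeurIPS_with_cited record (path is illustrative).
paper = json.loads(Path("NeurIPS_with_cited/neurips_1048.json").read_text())

print(paper["title"])
print(paper["author_limitations_gt"][:200])    # author-side ground truth
print(paper["reviewer_limitations_gt"][:200])  # reviewer-side ground truth

# Sections of the main paper live under full_text.
intro = paper["full_text"].get("1 Introduction", "")

# cited_in: papers this paper cites; cited_by: papers that cite it.
for neighbor in paper["cited_in"] + paper["cited_by"]:
    print(neighbor["title"], len(neighbor.get("full_text", {})))
```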
|
|
|
|
|
### PeerJ |
|
|
- `v1_Abstract`: abstract of the paper from version 1 (the initial submission)

- `v1_Introduction`: introduction of the paper from version 1

- ...

- `v2_Abstract`: abstract of the paper from version 2 (the author's re-submission)

- `v2_Introduction`: introduction of the paper from version 2

- ...

- `review1`: review from reviewer 1

- `review2`: review from reviewer 2

- `review3`: review from reviewer 3

- `review4`: review from reviewer 4

- ...

- `pdf_1`: PDF link for version 1

- `pdf_2`: PDF link for version 2

- ...

- `all_reviews`: concatenation of all reviews

- `LLM_extracted_review`: LLM-extracted version of the reviews
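A sketch for one PeerJ record that pairs the two submission versions and collects the reviews; the file name is illustrative, and the review-key match is case-insensitive in case capitalization varies across files:

```python
import json
from pathlib import Path

# Load one PeerJ record (file name is illustrative).
rec = json.loads(Path("peerj_json_files/example.json").read_text())

# Group section keys by submission version.
v1 = {k[3:]: v for k, v in rec.items() if k.startswith("v1_")}
v2 = {k[3:]: v for k, v in rec.items() if k.startswith("v2_")}
shared_sections = sorted(v1.keys() & v2.keys())  # present in both versions

# Collect per-reviewer reports (case-insensitive, to be safe).
reviews = [v for k, v in rec.items() if k.lower().startswith("review")]

print(shared_sections, len(reviews))
print(rec.get("LLM_extracted_review", "")[:200])
```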
|
|
|
|
|
|
|
|
## Pipeline

### Step 1: Ground truth extraction

We parse each paper with ScienceParse to recover structured sections (title, abstract, and all headings/body text), and we collect peer-review content from OpenReview using a Selenium scraper. For limitation extraction, we first look for a dedicated section whose heading contains "Limitation" or "Limitations" and take that section verbatim. If no such section exists, we scan the paper (excluding the Abstract, Introduction, and Related Work sections) for the first sentence containing "limitation"/"limitations" (case-insensitive) and extract text from that sentence onward, stopping as soon as we encounter a boundary keyword so that unrelated material is excluded. The boundary keywords are: ethics, ethical statement, discussion/discussions, conclusion, grant, and appendix. This simple heuristic keeps the extracted spans focused on genuine limitations while minimizing boilerplate.
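A minimal re-implementation sketch of this heuristic, assuming sections are given as a heading → text dict; the actual pipeline may differ in details such as sentence splitting:

```python
import re

# Boundary keywords from the description above.
BOUNDARY = re.compile(
    r"\b(ethics|ethical statement|discussions?|conclusion|grant|appendix)\b",
    re.IGNORECASE,
)
SKIP_HEADINGS = ("Abstract", "Introduction", "Related Work")

def extract_limitations(sections: dict) -> str:
    # 1) Prefer a dedicated section whose heading mentions "Limitation(s)".
    for heading, body in sections.items():
        if "limitation" in heading.lower():
            return body
    # 2) Otherwise scan the remaining sections for the first sentence that
    #    mentions "limitation", keeping text until a boundary keyword.
    for heading, body in sections.items():
        if any(skip in heading for skip in SKIP_HEADINGS):
            continue
        sentences = re.split(r"(?<=[.!?])\s+", body)
        for i, sent in enumerate(sentences):
            if "limitation" in sent.lower():
                kept = []
                for later in sentences[i:]:
                    if BOUNDARY.search(later):  # stop at a boundary keyword
                        return " ".join(kept)
                    kept.append(later)
                return " ".join(kept)
    return ""
```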
|
|
|
|
|
### Step 2: Ground truth re-extraction (GPT-4o mini)

We standardize limitation signals by running each paper through an extract-only pipeline. First, we take the author-mentioned `Limitation` text and the `Reviewer Comment` field from the JSON. Each source is sent to GPT-4o mini with a strict "no paraphrasing" prompt to return verbatim limitation spans (author → `limitations_author_extracted`, reviewer → `limitations_reviewer_extracted`). We then pass both lists to a master GPT-4o mini step that deduplicates near-identical spans while preserving provenance, marking whether a consolidated span came from the author, the reviewers, or both. The final merged list is saved as `limitations_consolidated`. A code sketch follows this outline.

Steps:

1. Inputs: author `Limitation`; `Reviewer Comment`.
2. Author extractor: GPT-4o mini returns verbatim limitation spans with `source='author'`.
3. Reviewer extractor: GPT-4o mini returns verbatim limitation spans with `source='reviewer'`.
4. Master consolidation (no generation): deduplicate/merge near-duplicates, picking an existing span and keeping provenance.
5. Outputs: `limitations_author_extracted`, `limitations_reviewer_extracted`, `limitations_consolidated`.
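A hedged sketch of the extract-only calls using the OpenAI Python client; the prompt wording here is an assumption, not the exact prompt used to build the dataset:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative "no paraphrasing" instruction; the dataset's exact prompt
# is not reproduced here.
EXTRACT_PROMPT = (
    "Copy every limitation statement from the text below verbatim, one per "
    "line. Do not paraphrase, summarize, or add anything."
)

def extract_spans(text: str) -> list:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system", "content": EXTRACT_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    content = resp.choices[0].message.content or ""
    return [line.strip() for line in content.splitlines() if line.strip()]

author_text = "..."    # the 'Limitation' field of one JSON record
reviewer_text = "..."  # the 'Reviewer Comment' field of the same record

limitations_author_extracted = extract_spans(author_text)
limitations_reviewer_extracted = extract_spans(reviewer_text)
# A second "master" call then deduplicates the union of both lists while
# tagging each consolidated span with its provenance.
```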
|
|
|
|
|
**For ACL**

```
'Limitation' ──> GPT-4o mini Extractor ──> limitations_author_extracted  (ground-truth limitation)
```

**For NeurIPS and PeerJ**

```
'Limitation'        ──> GPT-4o mini Extractor ──> limitations_author_extracted
'Reviewer Comment'  ──> GPT-4o mini Extractor ──> limitations_reviewer_extracted
limitations_author_extracted + limitations_reviewer_extracted ──> GPT-4o mini Merger ──> limitations_consolidated  (ground-truth limitation)
```
|
|
|
|
|
## Intended uses

- Text generation: generate a paper's limitations (or other sections) and evaluate the model-generated text against the ground truth.
- Binary classification: detect whether a paper includes an explicit limitation (author- or reviewer-stated).
- Retrieval & QA: retrieve limitation passages given a query (paper, section, topic).
- Author–reviewer alignment: compare author-stated limitations with reviewer-raised shortcomings.
|
|
|
|
|
## Suggested metrics

We strongly recommend our pointwise evaluation approach for measuring agreement between the ground truth and model-generated text (see the Citation section for the paper).
|
|
|
|
|
Other options:

- ROUGE-1/2/L, BERTScore, BLEU, cosine similarity, Jaccard similarity (generation; see the sketch below)
- LLM-as-a-judge (coherence, faithfulness, readability, grammar, overall quality)
- F1 / macro-F1 (classification)
- nDCG / MRR (retrieval)
|
|
|
|
|
## Curation and processing notes

- PDFs were parsed and sectionized; headings are preserved verbatim (e.g., `1 Introduction`).
- Author-side limitation spans are prioritized; reviewer-side text aggregates multi-reviewer fields (`Reviewer_1`, `Reviewer_2`, …).
- Heuristics avoid false positives (e.g., ignoring sentences that start with checklist prompts like "Did you …").
|
|
|
|
|
## Examples |
|
|
|
|
|
### ACL 2023 / ACL 2024 |
|
|
```json |
|
|
{ |
|
|
"File Number": "123", |
|
|
"Title": "Example Paper Title", |
|
|
"Limitations": "Our study is limited by dataset size and domain coverage ...", (GPT 4o mini is used to get ground truth) |
|
|
"abstractText": "We study ...", |
|
|
"1 Introduction": " ... ", |
|
|
"2 Related Work": " ... ", |
|
|
"3 Methodology": " ... " |
|
|
} |
|
|
```

GPT-4o mini is applied to the `Limitation` field to obtain the ground-truth limitation (see the pipeline above).
|
|
|
|
|
### NeurIPS 2021–2022

```json
{
  "File Number": "123",
  "Title": "Example Paper Title",
  "Limitation": "Due to the lack of access, a major limitation of our study ...",
  "Limitations Refined": "Due to the lack of access ...",
  "Reviewer Comment": "Reviewer_2: I totally agree ..., Reviewer_3: The work provides ....",
  "abstractText": "We study ...",
  "1 Introduction": " ... ",
  "2 Related Work": " ... ",
  "3 Methodology": " ... "
}
```

GPT-4o mini is applied to `Limitations Refined` and `Reviewer Comment` to obtain the ground truth (see the pipeline above).
|
|
|
|
|
### NeurIPS_with_cited

```json
{
  "file_id": "neurips_123",
  "title": "Example Paper Title",
  "abstract": "We study ...",
  "author_limitations_gt": "Due to the lack of access ...",
  "reviewer_limitations_gt": "Reviewer_2: I totally agree ..., Reviewer_3: The work provides ....",
  "full_text": {
    "1 Introduction": " ... ",
    "2 Related Work": " ... ",
    "3 Methodology": " ... "
  },
  "cited_in": [
|
|
{ |
|
|
"title": "A Foundational Paper Referenced by the Main Paper", |
|
|
"abstract": "This paper introduces the concept of...", |
|
|
"full_text": {"Introduction":"This is intro1", "Related Work":"This is related work1", "Methodology":"This is methodology1" } |
|
|
}, |
|
|
{ |
|
|
"title": "Another Paper Referenced by the Main Paper", |
|
|
"abstract": "This paper introduces the concept of...", |
|
|
"full_text": {"Introduction":"This is intro2", "Related Work":"This is related work2", "Methodology":"This is methodology2" } |
|
|
} |
|
|
], |
|
|
"cited_by": [ |
|
|
{ |
|
|
"title": "An Analysis of Agent-Based Reasoning (Citing Paper 1)", |
|
|
"abstract": "Building on the work from our main paper, this paper explores...", |
|
|
"full_text": {"Introduction":"This is intro3", "Related Work":"This is related work3", "Methodology":"This is methodology3" } |
|
|
}, |
|
|
{ |
|
|
"title": "Another Citing Paper (Citing Paper 2)", |
|
|
"abstract": "Further analysis of the framework shows...", |
|
|
"full_text": {"Introduction":"This is intro4", "Related Work":"This is related work4", "Methodology":"This is methodology4" } |
|
|
} |
|
|
] |
|
|
} |
|
|
|
|
|
``` |
|
|
|
|
|
### PeerJ

```json
{
  "v1_Abstract": "Paper's abstract from version 1 (initial submission)",
  "v1_Introduction": "Paper's introduction from version 1 (initial submission)",
  "v1_Result_and_Experiments": "Paper's results and experiments from version 1 (initial submission)",
  "v1_text": "all other sections' text (initial submission)",
  ...
  "v2_Abstract": "Paper's abstract from version 2 (re-submission)",
  "v2_Introduction": "Paper's introduction from version 2 (re-submission)",
  "v2_Result_and_Experiments": "Paper's results and experiments from version 2 (re-submission)",
  "v2_text": "all other sections' text (re-submission)",
  ...
  "review1": "Review from reviewer 1",
  "review2": "Review from reviewer 2",
  ...
  "all_reviews": "all reviews, concatenated",
  "pdf_1": "PDF link for version 1",
  "pdf_2": "PDF link for version 2",
  "LLM_extracted_review": "LLM-extracted version of the reviews"
}
```
|
|
|
|
|
|
|
|
## Citation |
|
|
This dataset accompanies the following work; the code is available in [BAGELS_Limitation_Gen on GitHub](https://github.com/IbrahimAlAzhar/BAGELS_Limitation_Gen).
|
|
|
|
|
|
|
|
> Azher, Ibrahim Al; Mokarrama, Miftahul Jannat; Guo, Zhishuai; Choudhury, Sagnik Ray; Alhoori, Hamed (2025). *BAGELS: Benchmarking the Automated Generation and Extraction of Limitations from Scholarly Text*. arXiv preprint arXiv:2505.18207.
|
|
|
|
|
This work has been **accepted at EMNLP 2025 (Findings)**. |
|
|
|
|
|
If you use this dataset, please cite: |
|
|
|
|
|
```bibtex
|
|
@article{azher2025bagels, |
|
|
title={BAGELS: Benchmarking the Automated Generation and Extraction of Limitations from Scholarly Text}, |
|
|
author={Azher, Ibrahim Al and Mokarrama, Miftahul Jannat and Guo, Zhishuai and Choudhury, Sagnik Ray and Alhoori, Hamed}, |
|
|
journal={arXiv preprint arXiv:2505.18207}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|