---
license: other
task_categories:
- text-generation
language:
- en
tags:
- limitations
- peer-review
pretty_name: Limitation_dataset_BAGELS
size_categories:
- 10K<n<100K
---
## Overview
Limitation_dataset_BAGELS is a structured corpus of JSON files drawn from ACL 2023 (3,013 papers),
ACL 2024 (2,727 papers), and NeurIPS 2021–2022 (7,069 papers), plus 774 PeerJ papers with versioned
submissions and reviews. Each record includes the title, abstract, and sectionized full text
(e.g., Introduction, Related Work, Methodology, Results/Experiments). As ground truth, ACL 2023 and
ACL 2024 contain only author-mentioned limitations, while NeurIPS 2021–2022 contains both
author-mentioned limitations and OpenReview-derived reviewer signals. Counts by label:
ACL 2023 (2,558 with an author limitation, 455 without), ACL 2024 (2,440 with, 287 without),
NeurIPS 2021–2022 (2,830 with an author limitation and/or OpenReview signals, 4,239 without). The dataset
supports limitation detection, span extraction/summarization, retrieval and QA over scholarly articles,
and alignment analyses between author-stated limitations and reviewer feedback.
## Dataset at a glance
| Subset | # Papers | With ground truth | Without ground truth | Ground-truth definition |
|---------------------|---------:|------------------:|---------------------:|-------------------------------------------------------------------------------|
| ACL 2023 | 3,013 | 2,558 | 455 | Author-mentioned limitation (Limitation) present. |
| ACL 2024 | 2,727 | 2,440 | 287 | Author-mentioned limitation (Limitation) present. |
| NeurIPS 2021–2022 | 7,069 | 2,830 | 4,239 | Author-mentioned limitation (Limitations Refined) and OpenReview-derived reviewer comment (Reviewer Comment). |
| PeerJ | 774 | 774 | 0 | Author-mentioned limitation and reviewer comments from PeerJ's review process. |
| **Total** | **13,583** | **8,602** | **4,981** | |
## Dataset Structure
This dataset is organized into several folders based on the source venue (ACL, NeurIPS, PeerJ) and the specific data included for each paper.
### Total Limitations
- On average, each paper has 8 author-stated limitations, and each paper with OpenReview signals has about 10 reviewer-derived limitations. This gives roughly 39,984 (ACL) + 50,940 (NeurIPS, limitations + OpenReview) + 33,912 (NeurIPS, limitations only) + 6,192 (PeerJ) = 131,028 limitations in total.
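The rough total above can be reproduced from the per-subset paper counts in the table; the snippet below is a worked check of that arithmetic (the per-paper averages are the ones stated above):

```python
# Rough limitation-count estimate from the per-subset paper counts.
AVG_AUTHOR = 8    # average author-stated limitations per paper
AVG_REVIEW = 10   # average OpenReview-derived limitations per paper

acl_papers = 2_558 + 2_440    # ACL 2023 + ACL 2024 papers with a Limitations section
neurips_both = 2_830          # NeurIPS papers with author limitations and OpenReview signals
neurips_author_only = 4_239   # remaining NeurIPS papers (author limitations only)
peerj_papers = 774

total = (
    acl_papers * AVG_AUTHOR
    + neurips_both * (AVG_AUTHOR + AVG_REVIEW)
    + neurips_author_only * AVG_AUTHOR
    + peerj_papers * AVG_AUTHOR
)
print(total)  # 131028
```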
### ACL Data (2023-2024)
- ACL_23_with_limitation: Contains ACL 2023 papers that include a dedicated "Limitations" section.
- ACL_23_no_limitation: Contains ACL 2023 papers that do not include a dedicated "Limitations" section.
- ACL_24_with_limitation: Contains ACL 2024 papers that include a dedicated "Limitations" section.
- ACL_24_no_limitation: Contains ACL 2024 papers that do not include a dedicated "Limitations" section.
### NeurIPS Data (2021-2022)
- Neurips_21_22_Limitation_OpenReview: Contains NeurIPS 2021-2022 papers. The ground truth for each paper combines author-stated limitations with critiques from OpenReview.
- Neurips_21_22_no_openreview: Contains NeurIPS 2021-2022 papers where the ground truth consists only of author-stated limitations.
- NeurIPS_with_cited: Our most comprehensive dataset. Contains the NeurIPS 2021-2022 papers along with their full citation network ("Cited In" and "Cited By" papers). The ground truth includes both author-stated limitations and OpenReview critiques.
### PeerJ
- peerj_json_files: Contains PeerJ papers. The ground truth for each paper combines author-stated limitations with critiques from the PeerJ reviews.
## Schema
### ACL 2023 & ACL 2024
- `File Number` *(string)*
- `Title` *(string)*
- `Limitation` *(string)* — author-mentioned limitation; used as ground truth
- `abstractText` *(string)*
- Section keys *(strings)*, e.g.: `"1 Introduction"`, `"2 Related Work"`, `"3 Methodology"`, `"Results and Experiments"`, `"Data"`, `"Other sections"`
### NeurIPS 2021–2022
- `File Number` *(string)*
- `Title` *(string)*
- `Limitation` *(string)* — author-mentioned limitation
- `Limitation Refined` *(string)* — author-mentioned limitation after removing noisy sentences from other sections; used as ground truth
- `Reviewer Comment` *(string)* — concatenation of reviewer limitation excerpts, formatted per reviewer; used as ground truth
- `Reviewer Summary` *(string)* — concatenation of reviewer summaries, formatted per reviewer
- `abstractText` *(string)*
- Section keys *(strings)*, e.g.: `"1 Introduction"`, `"2 Related Work"`, `"3 Methodology"`, `"Results and Experiments"`, `"Data"`, `"Other sections"`
- `Author mentioned Limitation` *(string)* — extracted span(s)
### NeurIPS_with_cited
- Each JSON file in this directory represents a single NeurIPS paper and its associated data, with the following schema:
- `file_id` (string): A unique identifier for the paper (e.g., "neurips_1048").
- `title` (string): The title of the paper.
- `abstract` (string): The abstract of the paper.
- `author_limitations_gt` (string): The refined, author-stated limitations, which serve as one component of the ground truth.
- `reviewer_limitations_gt` (string): A concatenation of limitation-related excerpts from OpenReview comments, serving as the other component of the ground truth.
- `full_text` (dict): A dictionary containing the full text of the paper, where keys are section titles (e.g., "1 Introduction") and values are the text of that section.
- `cited_in` (list): A list of papers from the input paper's bibliography (i.e., papers it cites). Each element in the list is an object containing the cited paper's text.
- `cited_by` (list): A list of papers that cite the input paper. Each element is an object containing the citing paper's text.
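A record in this layout can be read with the standard library; the sketch below assumes the schema above (the helper name and file path are illustrative, not part of the dataset's code):

```python
import json

def load_record(path):
    """Read one NeurIPS_with_cited JSON file and pull out the key fields."""
    with open(path, encoding="utf-8") as f:
        paper = json.load(f)
    # Ground truth combines author-stated and reviewer-derived limitations.
    gt = {
        "author": paper["author_limitations_gt"],
        "reviewer": paper["reviewer_limitations_gt"],
    }
    sections = list(paper["full_text"])           # e.g. ["1 Introduction", ...]
    n_cites = (len(paper.get("cited_in", [])),    # papers it cites
               len(paper.get("cited_by", [])))    # papers that cite it
    return gt, sections, n_cites
```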
### PeerJ
- `v1_Abstract`: Abstract of the paper from version 1 (the initial submission is version 1).
- `v1_Introduction`: Introduction of the paper from version 1.
- ......
- `v2_Abstract`: Abstract of the paper from version 2 (the author's re-submission is version 2).
- `v2_Introduction`: Introduction of the paper from version 2.
- .......
- `review1`: Review from reviewer 1.
- `review2`: Review from reviewer 2.
- `review3`: Review from reviewer 3.
- `review4`: Review from reviewer 4.
- ........
- `pdf_1`: PDF link for version 1.
- `pdf_2`: PDF link for version 2.
- ....
- `all_reviews`: Concatenation of all reviews.
- `LLM_extracted_review`: LLM-extracted version of the reviews.
```markdown
pipeline:
Step 1: "Ground Truth Extraction Pipeline"
description: |
We parse each paper with ScienceParse to recover structured sections (title, abstract, and all headings/body text), and we collect peer-review content from
OpenReview using a Selenium scraper. For Limitations extraction, we first look for a dedicated section whose heading contains “Limitation” or “Limitations” and take
that section verbatim. If no such section exists, we scan the paper (except Abstract, Introduction, and Related Work sections) for the first sentence containing
“limitation”/“limitations” (case-insensitive) and extract text from that sentence onward, but stop as soon as we encounter a boundary keyword to avoid unrelated
material. The boundary keywords we use are: ethics, ethical statement, discussion/discussions, conclusion, grant, and appendix. This simple heuristic keeps the
extracted spans focused on genuine limitations while minimizing boilerplate.
```
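The Step 1 heuristic can be sketched as follows; the function and constant names are illustrative, not the dataset's actual code, and `sections` stands in for the heading-to-body mapping that ScienceParse produces:

```python
import re

# Keywords marking the end of a limitation span, per the heuristic above.
BOUNDARY = ("ethics", "ethical statement", "discussion", "discussions",
            "conclusion", "grant", "appendix")
# Sections excluded from the fallback scan.
SKIP = ("abstract", "introduction", "related work")

def extract_limitations(sections):
    """sections: dict mapping section heading -> body text."""
    # 1) Prefer a dedicated Limitations section, taken verbatim.
    for heading, body in sections.items():
        if "limitation" in heading.lower():
            return body
    # 2) Otherwise, find the first sentence mentioning "limitation(s)"
    #    and keep text from there up to the first boundary keyword.
    for heading, body in sections.items():
        if any(s in heading.lower() for s in SKIP):
            continue
        m = re.search(r"limitations?", body, flags=re.IGNORECASE)
        if not m:
            continue
        start = body.rfind(".", 0, m.start()) + 1  # back up to sentence start
        span = body[start:]
        for kw in BOUNDARY:                        # truncate at boundary keywords
            cut = span.lower().find(kw)
            if cut != -1:
                span = span[:cut]
        return span.strip()
    return None
```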
```markdown
Step 2: "Ground Truth Re-Extraction Pipeline (GPT-4o mini)"
description: |
We standardize limitation signals by running each paper through an extract-only pipeline. First, we take the
author-mentioned Limitation text and the Reviewer Comment fields from the JSON. Each source is sent to
GPT-4o mini with a strict “no paraphrasing” prompt to return verbatim limitation spans (author → limitations_author_extracted,
reviewer → limitations_reviewer_extracted). We then pass both lists to a master GPT-4o mini step that deduplicates
near-identical spans. This step also preserves provenance, marking whether a consolidated span came
from the author, reviewers, or both. The final merged list is saved as limitations_consolidated.
steps:
- "Inputs: Author 'Limitation'; Reviewer Comment."
- "Author extractor: GPT-4o mini returns verbatim limitation spans with source='author'."
- "Reviewer extractor: GPT-4o mini returns verbatim limitation spans with source='reviewer'."
- "Master consolidation (no generation): deduplicate/merge near-duplicates; pick an existing span; keep provenance."
- "Outputs: limitations_author_extracted, limitations_reviewer_extracted, limitations_consolidated."
```
```markdown
- **For ACL**
'Limitation' ──> GPT-4o mini Extractor ──> limitations_author_extracted
- `limitations_author_extracted` (Ground truth limitation)
- **For NeurIPS and PeerJ**
  - 'Limitation' ──> GPT-4o mini Extractor ──> limitations_author_extracted
- 'Reviewer Comment' ──> GPT-4o mini Extractor ──> limitations_reviewer_extracted
- limitations_author_extracted + limitations_reviewer_extracted ──> GPT-4o mini Merger ──> limitations_consolidated (Ground truth limitation)
```
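The consolidation step in the dataset is performed by GPT-4o mini. As a rough non-LLM stand-in for illustration only, near-duplicate spans can be merged with a token-overlap test while preserving provenance (all names here are hypothetical):

```python
def _tokens(s):
    return set(s.lower().split())

def consolidate(author_spans, reviewer_spans, threshold=0.8):
    """Merge near-identical spans; record whether each came from author, reviewer, or both."""
    merged = [{"span": s, "source": {"author"}} for s in author_spans]
    for r in reviewer_spans:
        for item in merged:
            a, b = _tokens(item["span"]), _tokens(r)
            jaccard = len(a & b) / len(a | b) if a | b else 0.0
            if jaccard >= threshold:          # near-duplicate: keep the existing span
                item["source"].add("reviewer")
                break
        else:                                 # no close match: keep as a new span
            merged.append({"span": r, "source": {"reviewer"}})
    return merged
```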
Intended uses:
- "Text generation: generate limitations (or other sections) and evaluate the model-generated text against the ground truth."
Also, this dataset can be used for:
- "Binary classification: detect whether a paper includes an explicit limitation (author/reviewer)."
- "Retrieval & QA: retrieve limitation passages given a query (paper, section, topic)."
- "Author–reviewer alignment: compare author-stated limitations vs reviewer-raised shortcomings."
Suggested metrics:
- "We strongly suggest using our PointWise Evaluation approach to measure agreement between the ground truth and model-generated text (see the Citation section for the paper)."
Other suggested metrics:
- "ROUGE-1/2/L, BERTScore, BLEU, Cosine Similarity, Jaccard Similarity"
- LLM as a Judge (for Coherence, Faithfulness, Readability, Grammar, Overall Performance)
- "F1 / macro-F1 (classification)"
- "ROUGE / BERTScore (generation)"
- "nDCG / MRR (retrieval)"
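For the lexical-overlap metrics listed above, Jaccard and cosine similarity over token counts need no external dependencies; a minimal sketch:

```python
from collections import Counter
from math import sqrt

def jaccard(a, b):
    """Jaccard similarity over the sets of lowercase tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cosine(a, b):
    """Cosine similarity over bag-of-words token counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

ROUGE, BLEU, and BERTScore need dedicated packages (e.g., `rouge-score`, `sacrebleu`, `bert-score`); the functions above cover only the two set/vector overlap metrics.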
Curation and processing notes:
- "PDFs were parsed and sectionized; headings preserved verbatim (e.g., '1 Introduction')."
- "Author-side limitation spans prioritized; reviewer-side text aggregates multi-reviewer fields (Reviewer_1, Reviewer_2, …)."
- "Heuristics avoid false positives (e.g., ignoring sentences that start with prompts like 'Did you …')."
## Examples
### ACL 2023 / ACL 2024
```json
{
  "File Number": "123",
  "Title": "Example Paper Title",
  "Limitation": "Our study is limited by dataset size and domain coverage ...",
  "abstractText": "We study ...",
  "1 Introduction": " ... ",
  "2 Related Work": " ... ",
  "3 Methodology": " ... "
}
```
GPT-4o mini is used to produce the ground truth from the `Limitation` field.
### NeurIPS 2021–2022
```json
{
  "File Number": "123",
  "Title": "Example Paper Title",
  "Limitation": "Due to the lack of access, a major limitation of our study ...",
  "Limitation Refined": "Due to the lack of access ...",
  "Reviewer Comment": "Reviewer_2: I totally agree ..., Reviewer_3: The work provides ...",
  "abstractText": "We study ...",
  "1 Introduction": " ... ",
  "2 Related Work": " ... ",
  "3 Methodology": " ... "
}
```
GPT-4o mini is used to produce the ground truth from the `Limitation Refined` and `Reviewer Comment` fields.
### NeurIPS_with_cited
```json
{
  "file_id": "neurips_123",
  "title": "Example Paper Title",
  "abstract": "We study ...",
  "author_limitations_gt": "Due to the lack of access ...",
  "reviewer_limitations_gt": "Reviewer_2: I totally agree ..., Reviewer_3: The work provides ...",
  "full_text": {
    "1 Introduction": " ... ",
    "2 Related Work": " ... ",
    "3 Methodology": " ... "
  },
  "cited_in": [
    {
      "title": "A Foundational Paper Referenced by the Main Paper",
      "abstract": "This paper introduces the concept of ...",
      "full_text": {"Introduction": "This is intro1", "Related Work": "This is related work1", "Methodology": "This is methodology1"}
    },
    {
      "title": "Another Paper Referenced by the Main Paper",
      "abstract": "This paper introduces the concept of ...",
      "full_text": {"Introduction": "This is intro2", "Related Work": "This is related work2", "Methodology": "This is methodology2"}
    }
  ],
  "cited_by": [
    {
      "title": "An Analysis of Agent-Based Reasoning (Citing Paper 1)",
      "abstract": "Building on the work from our main paper, this paper explores ...",
      "full_text": {"Introduction": "This is intro3", "Related Work": "This is related work3", "Methodology": "This is methodology3"}
    },
    {
      "title": "Another Citing Paper (Citing Paper 2)",
      "abstract": "Further analysis of the framework shows ...",
      "full_text": {"Introduction": "This is intro4", "Related Work": "This is related work4", "Methodology": "This is methodology4"}
    }
  ]
}
```
GPT-4o mini is used to produce the ground truth in `author_limitations_gt` and `reviewer_limitations_gt`.
### PeerJ
```json
{
  "v1_abstract": "Paper's abstract from version 1 (initial submission)",
  "v1_Introduction": "Paper's introduction from version 1 (initial submission)",
  "v1_Result_and_Experiments": "Paper's results and experiments from version 1 (initial submission)",
  "v1_text": "all other sections' text (initial submission)",
  "v2_abstract": "Paper's abstract from version 2 (re-submission)",
  "v2_Introduction": "Paper's introduction from version 2 (re-submission)",
  "v2_Result_and_Experiments": "Paper's results and experiments from version 2 (re-submission)",
  "v2_text": "all other sections' text (re-submission)",
  "Review1": "Review from reviewer 1",
  "Review2": "Review from reviewer 2",
  "all_reviews": "all reviews concatenated",
  "pdf_1": "PDF link for version 1",
  "pdf_2": "PDF link for version 2",
  "LLM_extracted_review": "LLM-extracted version of the reviews"
}
```
## Citation
This dataset accompanies the following work. The code for this work is available at [BAGELS_Limitation_Gen on GitHub](https://github.com/IbrahimAlAzhar/BAGELS_Limitation_Gen).
```markdown
Azher, Ibrahim Al; Mokarrama, Miftahul Jannat; Guo, Zhishuai; Choudhury, Sagnik Ray; Alhoori, Hamed (2025). *BAGELS: Benchmarking the Automated Generation and Extraction of Limitations from Scholarly Text*. arXiv preprint arXiv:2505.18207.
```
This work has been **accepted at EMNLP 2025 (Findings)**.
If you use this dataset, please cite:
```markdown
@article{azher2025bagels,
title={BAGELS: Benchmarking the Automated Generation and Extraction of Limitations from Scholarly Text},
author={Azher, Ibrahim Al and Mokarrama, Miftahul Jannat and Guo, Zhishuai and Choudhury, Sagnik Ray and Alhoori, Hamed},
journal={arXiv preprint arXiv:2505.18207},
year={2025}
}
```