---
pretty_name: RMR-75K
language:
- en
license: mit
task_categories:
- text-generation
tags:
- datasets
- peer-review
- scientific-text
---
# RMR-75K

RMR-75K (Review-Map-Rebuttal) is a large-scale, segment-level mapping dataset that links review weakness/question key points to the specific rebuttal spans that address them. Each pair is annotated with:
- a review perspective label (7 categories), and
- a rebuttal impact category (5 levels) reflecting the author's reaction and degree of uptake.
## Dataset size
- Total mappings: 75,542
- Total papers: 4,825
- Distinct reviews: 16,583
- Avg. mappings per paper: 15.66
- Avg. mappings per review: 4.56
- Conference source: ICLR 2024
## Data format
Each line is a JSON object (JSONL). One object corresponds to one mapped review key point and its aligned rebuttal response span, with labels.
### Fields

| Field | Description |
|---|---|
| `paper_title` | The paper title. |
| `paper_id` | The OpenReview submission id. |
| `conference` | The source venue and year, for example `ICLR-2024`. |
| `review_id` | Identifier of the review the segment comes from. |
| `weakness_content` | The atomic weakness or question segment extracted from the review. |
| `perspective` | One of 7 review perspective labels. |
| `rebuttal_content` | The rebuttal span that addresses `weakness_content`. |
| `rebuttal_label` | One of 5 rebuttal impact categories. |
### Example

```json
{
  "paper_title": "Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages",
  "paper_id": "zzqn5G9fjn",
  "conference": "ICLR-2024",
  "review_id": "UQfBBoocAY",
  "weakness_content": "Although the paper is generally well-structured, the title mentions `low-resource` languages ... I would suggest ... include more tasks ... MasakhaNEWS ...",
  "perspective": "Experiments",
  "rebuttal_content": "Thank you for recommending these excellent datasets for our evaluation. ... we have initiated experiments with MasakhaNEWS ... Table 2 ...",
  "rebuttal_label": "CRP"
}
```
## Label taxonomy

### Review perspective labels (7)
Each review segment has exactly one perspective label:
| Perspective | Definition (brief) |
|---|---|
| Experiments | Experimental setup/design: missing/insufficient experiments, weak baselines, missing ablations, unclear datasets/splits, hyperparameters/seeds, compute/training details. |
| Evaluation | Metrics/analysis/interpretation: missing or inappropriate metrics, lack of statistical testing or error bars, insufficient analysis, inconsistencies between claims and results. |
| Reproducibility | Reproducibility details: missing code/data/links, missing hyperparameters, unclear preprocessing, seeds, hardware, insufficient instructions to replicate results. |
| Novelty | Originality/positioning vs prior work: incremental contribution, overlap, unclear differentiation, missing related work. |
| Theory | Theoretical correctness/justification: flawed assumptions, gaps in proofs, incorrect derivations, mismatch between theorems and algorithms. |
| Writing | Clarity/readability: grammar/style, ambiguous phrasing, undefined terms/symbols, confusing explanations. |
| Presentation | Figures/tables/organization: unclear plots/legends, formatting issues, misplaced/redundant content, overall structure hard to follow. |
### Rebuttal impact categories (5)
Each aligned rebuttal span has exactly one impact label:
| Label | Meaning (brief) |
|---|---|
| CRP | Concrete Revision Performed: authors point to specific changes or verifiable artifacts already added. |
| SRP | Specific Revision Plan: concrete future edits are committed with where/what to revise, but not yet implemented. |
| VCR | Vague Commitment to Revise: promises to improve without actionable details. |
| DWC | Defend Without Change: argues the paper already addresses the point; no edits proposed. |
| DRF | Deflect/Reframe: shifts responsibility or reframes the issue; no change offered. |
## Label distribution (RMR-75K)

Counts and percentages for Perspective × Impact:
| Perspective (total) | CRP | SRP | VCR | DWC | DRF |
|---|---|---|---|---|---|
| Evaluation (11,257) | 4,766 (42.3%) | 903 (8.0%) | 171 (1.5%) | 5,249 (46.6%) | 168 (1.5%) |
| Experiments (25,160) | 12,059 (47.9%) | 2,272 (9.0%) | 401 (1.6%) | 9,833 (39.1%) | 595 (2.4%) |
| Novelty (8,585) | 2,828 (32.9%) | 872 (10.2%) | 185 (2.2%) | 4,578 (53.3%) | 122 (1.4%) |
| Presentation (4,776) | 2,894 (60.6%) | 803 (16.8%) | 256 (5.4%) | 784 (16.4%) | 39 (0.8%) |
| Reproducibility (4,402) | 2,009 (45.6%) | 465 (10.6%) | 120 (2.7%) | 1,747 (39.7%) | 61 (1.4%) |
| Theory (12,822) | 4,253 (33.2%) | 1,110 (8.7%) | 282 (2.2%) | 6,859 (53.5%) | 318 (2.5%) |
| Writing (8,540) | 4,693 (55.0%) | 1,149 (13.5%) | 631 (7.4%) | 1,997 (23.4%) | 70 (0.8%) |
| Overall | 33,502 (44.3%) | 7,574 (10.0%) | 2,046 (2.7%) | 31,047 (41.1%) | 1,373 (1.8%) |
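The cross-tabulation above can be reproduced from the raw records with a simple counter. This is a minimal sketch over toy records (the real computation would run over all 75,542 mappings loaded from the JSONL file); only the field names are taken from the schema.

```python
from collections import Counter

# Toy records standing in for the full dataset; only "perspective" and
# "rebuttal_label" are needed for the cross-tab.
records = [
    {"perspective": "Experiments", "rebuttal_label": "CRP"},
    {"perspective": "Experiments", "rebuttal_label": "DWC"},
    {"perspective": "Writing", "rebuttal_label": "CRP"},
]

# Joint counts per (perspective, impact-label) cell and row totals.
pair_counts = Counter((r["perspective"], r["rebuttal_label"]) for r in records)
perspective_totals = Counter(r["perspective"] for r in records)

def share(perspective, label):
    """Percentage of a perspective's mappings that received the given impact label."""
    return 100.0 * pair_counts[(perspective, label)] / perspective_totals[perspective]
```

Each table cell is `pair_counts[(p, l)]` with its percentage given by `share(p, l)`; row totals are `perspective_totals[p]`.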
## Intended use
RMR-75K is designed for:
- training and evaluating perspective-conditioned review feedback generation
- leveraging rebuttal outcomes as weak supervision for multiple dimensions such as actionability
- studying the relationship between review points and the rebuttal responses they elicit
## Citation
If you find this dataset useful in your research, please cite:
```bibtex
@misc{wu2026rbtactrebuttalsupervisionactionable,
  title={RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation},
  author={Sihong Wu and Yiling Ma and Yilun Zhao and Tiansheng Hu and Owen Jiang and Manasi Patwardhan and Arman Cohan},
  year={2026},
  eprint={2603.09723},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2603.09723},
}
```