# ReviewBench
A large, multi-conference corpus of peer-reviewed papers + their reviews + author rebuttals + acceptance decisions, harvested from OpenReview and aligned with OCR'd full-text markdown of every paper.
- 51,529 papers
- 196,099 reviews
- 558,785 OCR'd PDF pages (markdown inlined per row)
- 7 conferences, 22 venue/year combinations, 2020–2026
```python
from datasets import load_dataset

ds = load_dataset("/reviewbench")
print(ds)
# DatasetDict({
#     neurips: Dataset(num_rows=...)
#     iclr: Dataset(num_rows=...)
#     icml: Dataset(num_rows=...)
#     tmlr: Dataset(num_rows=...)
#     emnlp: Dataset(num_rows=...)
#     corl: Dataset(num_rows=...)
#     colm: Dataset(num_rows=...)
# })
```
## Coverage

One split per conference family. Within a split, filter by `year`, `venue_id`, or `track` for slicing.
| Split | Venues / years | Tracks | ≈ Papers |
|---|---|---|---|
| neurips | 2021, 2022, 2023, 2023 D&B, 2024, 2025 | main + Datasets & Benchmarks (2023) | ~18,400 |
| iclr | 2020, 2021, 2022, 2023, 2024, 2025, 2026 | main | ~22,500 |
| icml | 2025 | main | ~3,300 |
| tmlr | rolling (all accepted papers as of April 2026) | main | ~3,750 |
| emnlp | 2023 | main + Findings | ~2,000 |
| corl | 2021, 2022, 2023, 2024 | main | ~820 |
| colm | 2024, 2025 | main | ~720 |
NeurIPS 2020 and earlier used CMT and have no public OpenReview reviews.
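Slicing within a split reduces to a simple row predicate. A minimal sketch, shown on plain dicts with illustrative values (the same predicate can be passed to `ds["neurips"].filter(...)` from the `datasets` library):

```python
# Hypothetical sample of row metadata (values are illustrative, not real rows).
rows = [
    {"forum_id": "aaa", "year": 2023, "track": "main"},
    {"forum_id": "bbb", "year": 2023, "track": "datasets_and_benchmarks"},
    {"forum_id": "ccc", "year": 2024, "track": "main"},
]

def is_2023_db(row):
    """Predicate selecting NeurIPS 2023 Datasets & Benchmarks rows."""
    return row["year"] == 2023 and row["track"] == "datasets_and_benchmarks"

subset = [r for r in rows if is_2023_db(r)]
```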
## Schema
Each row is one paper.
| Column | Type | Notes |
|---|---|---|
| `forum_id` | string | OpenReview forum ID — primary key |
| `conference` | string | `neurips` / `iclr` / `icml` / `tmlr` / `emnlp` / `corl` / `colm` |
| `year` | int32 | Conference year |
| `track` | string | `main` / `datasets_and_benchmarks` / `findings` / etc. |
| `venue_id` | string | OpenReview venue ID, e.g. `NeurIPS.cc/2024/Conference` |
| `paper_number` | int32 | Submission number (nullable) |
| `title` | string | |
| `abstract` | string | |
| `authors` | `list<string>` | |
| `keywords` | `list<string>` | |
| `tldr` | string | |
| `primary_area` | string | |
| `venue` | string | Final venue string, e.g. `"NeurIPS 2024 poster"` |
| `decision` | string | `Accept (poster)`, `Accept (oral)`, `Reject`, etc. |
| `decision_comment` | string | Area-chair meta-review |
| `author_rebuttal` | string | General rebuttal (≤2024); empty when per-reviewer rebuttals are used |
| `num_reviews` | int32 | Convenience count |
| `reviews_json` | string | All reviews as a JSON string — see schema below |
| `markdown` | string | OCR'd full text of the PDF (see OCR below); `""` if PDF unavailable |
| `markdown_chars` | int64 | `len(markdown)` for fast filtering |
## Decoding `reviews_json`

`reviews_json` is `json.dumps(list[dict])`. Decode with:

```python
import json

df = ds["neurips"].to_pandas()
df["reviews"] = df["reviews_json"].map(json.loads)
print(df.iloc[0]["reviews"][0].keys())
```
Each review dict is a union over all forms used by all venues across all years, with absent fields stored as empty strings / `None`. The most reliably populated fields are:

| Field | Where populated |
|---|---|
| `review_id`, `reviewer`, `rating`, `confidence`, `rebuttal` | All venues |
| `summary`, `questions`, `limitations`, `strengths`, `weaknesses` | NeurIPS 2022–24, ICLR, ICML, CoRL, COLM |
| `soundness`, `presentation`, `contribution` | NeurIPS 2022–24, ICLR, ICML |
| `quality`, `clarity`, `significance`, `originality` | NeurIPS 2025, TMLR |
| `strengths_and_weaknesses` | NeurIPS 2025 (merged form) |
| `was_revised`, `final_justification` | NeurIPS 2025 (in-place review revisions) |
| `claims_and_evidence`, `theoretical_claims`, `experimental_designs_or_analyses`, `relation_to_broader_scientific_literature`, `essential_references_not_discussed` | TMLR |
| `paper_topic_and_main_contributions`, `reasons_to_accept`, `reasons_to_reject`, `excitement`, `reproducibility`, `ethical_concerns` | EMNLP 2023 |
| `summary_of_paper`, `summary_of_recommendation`, `technical_quality`, `clarity_of_presentation`, `potential_impact`, `robotics_focus` | CoRL |
| `extra_scores`, `extra_text` | Dicts capturing any venue-specific fields not in the union |
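Because the free-text assessment lives in different fields depending on the venue form, downstream code typically needs a fallback chain. A minimal sketch on hand-written illustrative review dicts (the `assessment_text` helper is hypothetical, not part of the dataset):

```python
import json

# Illustrative review records in the union schema; absent fields are empty strings.
reviews_json = json.dumps([
    {"review_id": "r1", "reviewer": "Reviewer_A", "rating": "7",
     "strengths": "Clear theory.", "weaknesses": "Limited experiments.",
     "strengths_and_weaknesses": ""},
    {"review_id": "r2", "reviewer": "Reviewer_B", "rating": "6",
     "strengths": "", "weaknesses": "",
     "strengths_and_weaknesses": "Strong idea; writing needs work."},
])

def assessment_text(review):
    """Best-effort free-text assessment across venue forms:
    prefer the merged NeurIPS-2025-style field, else join the split fields."""
    merged = review.get("strengths_and_weaknesses") or ""
    if merged:
        return merged
    parts = [review.get("strengths") or "", review.get("weaknesses") or ""]
    return "\n".join(p for p in parts if p)

texts = [assessment_text(r) for r in json.loads(reviews_json)]
```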
Schema differences in detail:

- NeurIPS 2021 D&B (track): older form, most numeric sub-scores absent; lives in the `neurips` split.
- NeurIPS 2022–2024: `soundness`/`presentation`/`contribution`; separate `strengths`/`weaknesses`; single general `author_rebuttal`.
- NeurIPS 2025: `quality`/`clarity`/`significance`/`originality`; merged `strengths_and_weaknesses`; per-reviewer rebuttals; in-place review revisions tracked via `was_revised` + `final_justification`.
- ICLR 2020–2026: NeurIPS-style `soundness`/`presentation`/`contribution`; per-reviewer rebuttals.
- ICML 2025: similar to NeurIPS-style; some venue-specific fields (e.g. `technical_quality`, `novelty`) live in `extra_scores`.
- TMLR: claim-evidence-style structured review; rolling acceptance.
- EMNLP 2023: ARR-style review form with `reasons_to_accept`/`reasons_to_reject`/`excitement`/`reproducibility`.
- CoRL: robotics-focused review form.
- COLM: language-modeling-focused review form.
## Source and collection

- Source: OpenReview — main and any track-level conferences (e.g. NeurIPS Datasets & Benchmarks).
- Collected: April 2026 via the OpenReview Python API (a mix of `openreview.api` v2 and the legacy v1 client for older NeurIPS/ICLR years).
- Scraping pipeline: parallel Modal workers (100 containers, single-token reuse to bypass the 3-req/min rate limit). Each forum was fetched with all official reviews, official comments, the decision, and author rebuttals; PDFs were downloaded to a Modal volume.
## Markdown / OCR

- Engine: `nvidia/nemotron-ocr-v2`
- Compute: 10× NVIDIA L40S GPUs in parallel on Modal (~6 h wall-clock end-to-end)
- Throughput: ~7,200 PDFs/hour aggregate; mean 7.3 s/PDF per worker; 558,785 pages processed
- Failures: 1 PDF errored out during OCR; ~76 PDFs were unavailable from OpenReview at scrape time and have `markdown == ""`. Use `markdown_chars > 0` to filter them out.
### OCR quality (spot-checked across NeurIPS, ICLR, ICML, TMLR, CoRL, COLM)

What works well:

- Body prose, abstracts, section headers, paragraph structure
- In-line citations like `(Author et al., YEAR)` mostly preserved
- Equations rendered linearly (variable names + structure visible)
- Page boundaries marked with a `\n\n---\n\n` separator
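Given that separator, the inlined markdown can be split back into per-page chunks. A minimal sketch on a toy string (note that a genuine `---` horizontal rule inside a paper, surrounded by blank lines, would also match, so this is a heuristic):

```python
# Toy OCR output: two pages joined by the documented page separator.
markdown = "Page one text.\n\n---\n\nPage two text."

# Split the inlined markdown back into per-page chunks.
pages = markdown.split("\n\n---\n\n")
```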
Recurring artifacts to be aware of (consistent across venues, low impact for most NLP tasks):

- Email addresses and URLs containing repeated chars (e.g. `name@@@cmu.eed`, `https:////aaaaaaa`)
- Occasional word-doubling at line breaks (`decision-decision-making`, `Complex-Valuee Valued`)
- Citation lists missing semicolons (`(Singer 2007 Uhlhaas et al. 2009)`)
- Greek letters and super/sub-scripts often dropped or flattened
- First-page logo/header text occasionally bleeds into the title (`git Cooperative …`)
- Figure caption tokens interleave with body text on figure-heavy pages
Practical impact: fine for dense retrieval, language modeling, review-grounding, summarization. Not suitable for tasks requiring exact equations or canonical citation strings.
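The repeated-character artifact is partly mechanical, so a regex pass can soften it. A lossy, illustrative heuristic that is not part of the dataset pipeline: it collapses runs of repeated punctuation but deliberately leaves letter runs and word-doubling alone, since those cannot be distinguished from legitimate text without context.

```python
import re

def clean_ocr_artifacts(text):
    """Heuristic cleanup for repeated-character OCR artifacts.
    Lossy and illustrative only; misspellings like 'eed' are not repaired."""
    # Restore the double slash in URL schemes first (":////" -> "://")...
    text = re.sub(r"://{2,}", "://", text)
    # ...then collapse remaining runs of 3+ repeated punctuation chars to one.
    text = re.sub(r"([@/._-])\1{2,}", r"\1", text)
    return text

print(clean_ocr_artifacts("name@@@cmu.eed"))        # name@cmu.eed
print(clean_ocr_artifacts("https:////example.com"))  # https://example.com
```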
## Suggested uses
- Train review-quality classifiers / score predictors
- Study reviewer agreement, rebuttal effectiveness, decision dynamics
- Build retrieval-augmented or grounded scientific-paper assistants
- Meta-research on the peer-review process across venues and years
- Few-shot / RAG benchmarks that require the paper full text + the reviews
## Related work
This dataset was assembled in support of an ICML 2026 Datasets and Benchmarks-track submission introducing ReviewBench. Citation will be updated upon publication.
## License
Released under CC BY 4.0. Paper full text and review text remain the intellectual property of their respective authors; this dataset redistributes them for non-commercial research consistent with OpenReview's public-access policy. If you are an author and would like content removed, please open an issue on the dataset repository.
## Acknowledgements
- The OpenReview team for keeping the peer-review record open.
- NVIDIA for releasing nemotron-ocr-v2.
- Modal for the GPU and storage infrastructure used to assemble this corpus.