ICLR Reviews Dataset (2020-2026)
Peer reviews, meta-reviews, and paper content from ICLR conferences (2020-2026).
Dataset Description
This dataset contains complete papers from ICLR 2020-2026 with:
- Submission metadata (title, abstract, authors, decision)
- Up to 12 reviewer assessments with response counts
- Area Chair meta-reviews
- Raw and cleaned markdown paper content
- Section-level breakdown
Filtering Criteria
Complete papers must have:
- PDF available on OpenReview
- Markdown conversion successful
- Both Abstract and References sections detected
- Not Withdrawn or Desk Rejected
- Has a final decision
Abstract Normalization
The abstract field is normalized for anonymization:
- Sentences containing URLs are removed entirely (not just the URL)
- This catches GitHub links, project pages, code repositories, etc.
- Original abstract preserved in the `_original_abstract` field
This enables blind review analysis without leaking author identity through URLs.
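The URL-sentence removal described above can be sketched as follows. This is a hypothetical re-implementation, not the dataset's actual pipeline code; the regex and the naive sentence splitter are assumptions:

```python
import re

# Matches common URL forms, including GitHub links and project pages.
URL_RE = re.compile(r"https?://\S+|www\.\S+|github\.com/\S+")

def normalize_abstract(text: str) -> str:
    """Drop every sentence that contains a URL (not just the URL itself)."""
    # Naive sentence split on ., !, ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    kept = [s for s in sentences if not URL_RE.search(s)]
    return " ".join(kept)

abstract = ("We propose a new method. "
            "Code is available at https://github.com/example/repo. "
            "Results improve the state of the art.")
print(normalize_abstract(abstract))
# The sentence containing the GitHub URL is removed entirely.
```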
Coverage Statistics
| Year | Reviewable | Complete | % | Reject | Poster | Spotlight | Oral |
|---|---|---|---|---|---|---|---|
| 2020 | 2,213 | 2,190 | 99.0% | 1,526 | 531 | 108 | 48 |
| 2021 | 2,594 | 2,562 | 98.8% | 1,735 | 692 | 114 | 53 |
| 2022 | 2,617 | 2,543 | 97.2% | 1,523 | 865 | 174 | 55 |
| 2023 | 3,792 | 3,612 | 95.3% | 2,219 | 1,202 | 280 | 91 |
| 2024 | 5,780 | 5,498 | 95.1% | 3,519 | 1,808 | 367 | 86 |
| 2025 | 8,727 | 7,956 | 91.2% | 5,019 | 3,115 | 380 | 213 |
| 2026 | 15,948 | 9,189 | 57.6% | 0 | 0 | 0 | 0 |
| Total | 41,671 | 33,550 | 80.5% | 15,541 | 8,213 | 1,423 | 546 |
Notes:
- Reviewable = Total submissions minus Withdrawn and Desk Rejected
- Complete = Has MD file with detected Abstract and References sections
- Decision counts are for reviewable papers only
Schema
Each row contains:
| Field | Type | Description |
|---|---|---|
| `submission` | dict | Paper metadata (id, title, abstract, decision, authors, etc.) |
| `review_1` to `review_12` | dict/None | Reviewer assessments (None if fewer reviews) |
| `meta_review` | dict/None | Area Chair assessment |
| `raw_md` | str | Raw markdown from PDF conversion |
| `clean_md` | str | Cleaned markdown (Introduction → References) |
| `clean_md_sections` | dict | Mapping of section titles to content |
| `md_path` | str | Local path to markdown file |
| `pdf_path` | str/None | Local path to PDF file (None if not found) |
Submission Fields
```python
{
    "id": "abc123xyz",              # OpenReview ID
    "title": "Paper Title",
    "abstract": "Abstract text...",
    "decision": "Accept (Poster)",
    "authors": ["Author One", "Author Two"],
    "keywords": ["machine learning", "nlp"],
    "venue": "ICLR 2024 poster",
    "pdf_url": "/pdf/abc123xyz.pdf",
    "created_date": 1699574400,     # Unix timestamp
    "modified_date": 1699574400,
    "tldr": "Short summary...",
    "primary_area": "machine learning",
    "google_scholar_citations": 42, # Google Scholar citation count (None if not found)
}
```
Review Fields
Review schemas vary by year due to OpenReview changes:
| Year | Rating Field | Key Content Fields |
|---|---|---|
| 2020 | `rating` (str) | `review`, `experience_assessment`, `review_assessment_*` |
| 2021 | `rating` (str) | `review`, `confidence` |
| 2022 | `recommendation` (str) | `main_review`, `summary_of_the_paper`, `correctness`, `novelty` |
| 2023 | `recommendation` (str) | `strength_and_weaknesses`, `summary_of_the_paper`, `correctness`, `novelty` |
| 2024-2026 | `rating` (int) | `summary`, `strengths`, `weaknesses`, `soundness`, `presentation`, `contribution` |
All reviews include:
- `number_of_author_responses`: Author replies to this review
- `number_of_reviewer_responses_to_author`: Reviewer follow-ups
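As a sketch, the per-review response counts can be aggregated into a rough measure of discussion activity for a paper. The helper below is hypothetical, assuming only the `review_1` … `review_12` slots and the two count fields listed above:

```python
def discussion_rounds(row: dict) -> int:
    """Sum author/reviewer exchanges across all review slots of one row."""
    total = 0
    for i in range(1, 13):  # review_1 .. review_12
        review = row.get(f"review_{i}")
        if review:  # slot may be None if the paper had fewer reviews
            total += review.get("number_of_author_responses", 0)
            total += review.get("number_of_reviewer_responses_to_author", 0)
    return total

row = {
    "review_1": {"number_of_author_responses": 2,
                 "number_of_reviewer_responses_to_author": 1},
    "review_2": None,
}
print(discussion_rounds(row))  # 3
```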
Rating Scales
| Year | Rating Scale | Confidence | Sub-scores |
|---|---|---|---|
| 2020 | 1, 3, 6, 8 (str) | N/A | N/A |
| 2021 | 1-10 (str) | 1-5 (str) | N/A |
| 2022-2023 | 1-10 (str) | 1-5 (str) | correctness, novelty (str 1-4) |
| 2024-2026 | 1-10 (int) | 1-5 (int) | soundness, presentation, contribution (int 1-4) |
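Because the rating field name and type change across years, cross-year analysis needs a small normalization step. The following is a sketch under the assumptions in the tables above (string ratings often look like `"6: marginally above threshold"`); the parsing logic is illustrative, not the dataset's own code:

```python
def parse_rating(review: dict, year: int):
    """Return the overall rating as an int across year-specific schemas."""
    # 2022-2023 use 'recommendation'; all other years use 'rating'.
    field = "recommendation" if year in (2022, 2023) else "rating"
    value = review.get(field)
    if value is None:
        return None
    if isinstance(value, int):  # 2024-2026: already an int
        return value
    # Pre-2024 ratings are strings, e.g. "6: marginally above threshold".
    return int(str(value).split(":")[0].strip())

print(parse_rating({"rating": "6: marginally above threshold"}, 2021))  # 6
print(parse_rating({"recommendation": "8: accept"}, 2022))              # 8
print(parse_rating({"rating": 8}, 2024))                                # 8
```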
Markdown Normalization
The `clean_md` field contains normalized markdown produced from the raw PDF conversion. The normalization pipeline:
1. Validation
- Paper must have both Abstract and References sections detected
- Papers failing validation are excluded from the dataset
2. Content Clipping
- Start: First section header AFTER Abstract (typically "Introduction")
- End: End of References section (before Appendix/Supplementary)
- This removes: title, authors, abstract, appendices, supplementary material
3. Section Removal
- Acknowledgements: Removed entirely (to preserve anonymity for blind review analysis)
- Reproducibility: Removed entirely (often contains author-identifying information)
4. Artifact Cleaning
- Line numbers: Removed (e.g., `**054 055 056**` remnants from submitted PDFs)
- Standalone number lines: Removed (bare PDF line numbers like `327`, `337 338`)
- Page anchors: Lines containing `<span id="page-X">...</sup>` removed entirely
- Code/GitHub refs: Entire sentences containing `code...https://github...` removed (author code)
- Footnotes: Removed except those referencing figures
- Dagger markers: Removed (†, ‡) except figure references
5. Header Normalization
- All headers normalized to a single `#` level
- Titles converted to UPPERCASE
- Span tags and bold markers removed
- Example: `## 3.1 **<span>Methods</span>**` → `# 3.1 METHODS`
6. Whitespace Normalization
- Multiple blank lines collapsed to single blank line
- Trailing whitespace stripped
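The header normalization step (step 5 above) can be sketched as a small function. This is a hypothetical reconstruction of that step, not the pipeline's actual code:

```python
import re

def normalize_header(line: str) -> str:
    """Strip span tags and bold markers, collapse header depth to a
    single '#' level, and uppercase the title."""
    text = re.sub(r"^#+\s*", "", line)         # drop the existing # prefix
    text = re.sub(r"</?span[^>]*>", "", text)  # remove span tags
    text = text.replace("**", "")              # remove bold markers
    return "# " + text.strip().upper()

print(normalize_header("## 3.1 **<span>Methods</span>**"))
# -> "# 3.1 METHODS"
```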
Section Breakdown
The `clean_md_sections` field provides a dict mapping normalized section titles to content:
```json
{
    "INTRODUCTION": "Section content...",
    "RELATED WORK": "Section content...",
    "METHODS": "Section content...",
    "EXPERIMENTS": "Section content...",
    "CONCLUSION": "Section content...",
    "REFERENCES": "Reference list..."
}
```
Note: Section titles vary by paper. Common sections include INTRODUCTION, RELATED WORK, METHOD/METHODS, EXPERIMENTS, RESULTS, DISCUSSION, CONCLUSION, REFERENCES.
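Since the cleaning pipeline collapses all headers to a single `#` level, a section dict like `clean_md_sections` can be derived from `clean_md` by splitting on those headers. The sketch below is an assumption about how such a split could work (including stripping leading section numbers like `3.1`), not the dataset's own implementation:

```python
import re

def split_sections(clean_md: str) -> dict:
    """Split normalized markdown into {SECTION TITLE: content} on '#' headers."""
    sections = {}
    current, buf = None, []
    for line in clean_md.splitlines():
        # Match '# INTRODUCTION' or '# 3.1 METHODS' (number is optional).
        m = re.match(r"^#\s+(?:[\d.]+\s+)?(.+)$", line)
        if m:
            if current is not None:
                sections[current] = "\n".join(buf).strip()
            current, buf = m.group(1).strip(), []
        elif current is not None:
            buf.append(line)
    if current is not None:
        sections[current] = "\n".join(buf).strip()
    return sections

md = "# 1 INTRODUCTION\nIntro text.\n# REFERENCES\nRef list."
print(split_sections(md))
# {'INTRODUCTION': 'Intro text.', 'REFERENCES': 'Ref list.'}
```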
Usage
```python
from datasets import load_dataset

# Load a specific year (as a config/subset)
ds = load_dataset("skonan/iclr-data-2020-2026", "2024")

# Load the default config (most recent year)
ds = load_dataset("skonan/iclr-data-2020-2026")

# Access data
for row in ds["train"]:
    print(row["submission"]["title"])
    print(row["submission"]["decision"])

    # Access reviews
    if row["review_1"]:
        print(row["review_1"]["rating"])

    # Access sections
    intro = row["clean_md_sections"].get("INTRODUCTION", "")
    print(intro[:500])
```
Data Source
Data was extracted from OpenReview using the OpenReview API. Paper PDFs were converted to markdown using Marker.
License
Apache 2.0
Citation
If you use this dataset, please cite:
```bibtex
@misc{iclr-data-2020-2026,
  title={ICLR Reviews Dataset 2020-2026},
  author={OpenReview Community},
  year={2024},
  howpublished={HuggingFace Datasets},
  url={https://huggingface.co/datasets/skonan/iclr-data-2020-2026}
}
```