---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: ICLR Papers with Reviews
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-generation
- question-answering
- summarization
paperswithcode_id: null
tags:
- academic-papers
- peer-review
- machine-learning
- iclr
- openreview
---
# Dataset Card for ICLR Papers with Reviews (2023-2025)
## Dataset Description
This dataset contains paper submissions and review data from the International Conference on Learning Representations (ICLR) for the years 2023, 2024, and 2025. The data is sourced from OpenReview, an open peer review platform that hosts the review process for top ML conferences.
### Focus on Review Data
This dataset emphasizes the peer review ecosystem surrounding academic papers. Each record includes comprehensive review-related information:
- **Related Notes** (`related_notes`): Contains review discussions, meta-reviews, author responses, and community feedback from the OpenReview platform
- **Full Paper Content**: Complete paper text in Markdown format, enabling analysis of the relationship between paper content and review outcomes
- **Review Metadata**: Structured metadata including page statistics, table of contents, and document structure analysis
The review data captures the full peer review workflow:
- Initial submission reviews from multiple reviewers
- Author rebuttal and response rounds
- Meta-reviews from area chairs
- Final decision notifications (Accept/Reject)
- Post-publication discussions and community comments
This makes the dataset particularly valuable for:
- **Review Quality Analysis**: Studying patterns in peer review quality and consistency
- **Decision Prediction**: Building models to predict acceptance decisions from paper content and reviews
- **Review Generation**: Training models to generate constructive paper reviews
- **Bias Detection**: Analyzing potential biases in the peer review process
- **Scientific Discourse Analysis**: Understanding how scientific consensus forms through discussion
### Dataset Statistics
- **Total Papers**: 8,310
- **Year Coverage**: 2023-2025
- **Source**: OpenReview platform
- **Conference**: ICLR (International Conference on Learning Representations)
- **Content**: Full paper text + complete review discussions
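The per-year breakdown is not listed above, but it can be recomputed directly from the JSONL file. This is a minimal sketch; a `StringIO` with synthetic records stands in for the real file handle.

```python
import json
from collections import Counter
from io import StringIO

# Sketch: compute per-year paper counts while streaming the JSONL file.
# Replace fake_file with open('ICLR_merged_cleaned_huggingface.jsonl').
fake_file = StringIO(
    '{"id": "a", "year": "2023"}\n'
    '{"id": "b", "year": "2023"}\n'
    '{"id": "c", "year": "2025"}\n'
)

counts = Counter(json.loads(line)["year"] for line in fake_file)
print(dict(counts))  # {'2023': 2, '2025': 1}
```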
## Dataset Structure
### Data Instances
Each instance represents a paper with its associated review data:
```json
{
  "id": "RUzSobdYy0V",
  "title": "Quantifying and Mitigating the Impact of Label Errors on Model Disparity Metrics",
  "authors": "Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern",
  "abstract": "Errors in labels obtained via human annotation adversely affect...",
  "year": "2023",
  "conference": "ICLR",
  "related_notes": "[Review discussions, meta-reviews, and author responses]",
  "pdf_url": "https://openreview.net/pdf?id=RUzSobdYy0V",
  "source_url": "https://openreview.net/forum?id=RUzSobdYy0V",
  "content": "[Full paper text in Markdown format]",
  "content_meta": "[JSON metadata with TOC and page statistics]"
}
```
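Note that `content_meta` is stored as a JSON string, so it must be decoded before use. A minimal sketch, with a hypothetical record whose metadata keys (`toc`, `num_pages`) are illustrative only; the real structure may differ:

```python
import json

# Hypothetical record: the content_meta keys below are illustrative.
paper = {
    "id": "RUzSobdYy0V",
    "content_meta": '{"toc": ["Introduction", "Method"], "num_pages": 24}',
}

# Decode the JSON string into a dict before accessing metadata.
meta = json.loads(paper["content_meta"])
print(meta["toc"])        # ['Introduction', 'Method']
print(meta["num_pages"])  # 24
```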
### Data Fields
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique OpenReview paper ID |
| `title` | string | Paper title |
| `authors` | string | Author names (comma-separated) |
| `abstract` | string | Paper abstract |
| `year` | string | Publication year (2023-2025) |
| `conference` | string | Conference name (ICLR) |
| `related_notes` | string | Review data, including reviews, meta-reviews, and discussions |
| `pdf_url` | string | Link to the PDF on OpenReview |
| `source_url` | string | Link to the paper's forum on OpenReview |
| `content` | string | Full paper content in Markdown |
| `content_meta` | string | JSON metadata (TOC, page stats, structure) |
### Review Data Structure
The `related_notes` field contains the complete review history from OpenReview, including:
- **Primary Reviews**: Detailed reviews from 3-4 reviewers per paper
- **Reviewer Ratings**: Numerical scores and confidence levels
- **Author Responses**: Rebuttals and clarifications from authors
- **Meta-Reviews**: Summary and recommendations from area chairs
- **Final Decisions**: Accept/reject decisions with rationale
- **Post-Decision Discussions**: Community comments and feedback
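The exact serialization of `related_notes` is not specified in this card. As a minimal sketch, **assuming** individual notes are separated by blank lines, the field can be segmented so each note is inspectable on its own (the string below is synthetic):

```python
# ASSUMPTION: notes within related_notes are separated by blank lines.
# The string below is a synthetic stand-in for a real record's field.
related_notes = (
    "Official Review of Reviewer abc1\nRating: 8\n\n"
    "Author Response\nWe thank the reviewer...\n\n"
    "Decision\nAccept (poster)"
)

# Split on blank lines and drop empty segments.
notes = [block.strip() for block in related_notes.split("\n\n") if block.strip()]
print(len(notes))   # 3
print(notes[-1])    # the final decision note in this synthetic example
```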
### Data Splits
The dataset does not have predefined splits. Users should create their own train/validation/test splits based on their specific use case.
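One natural scheme is a temporal split: train on earlier ICLR years and hold out the most recent year for evaluation. A minimal sketch, with synthetic records standing in for papers loaded from the JSONL file:

```python
# Sketch of a year-based split; synthetic records stand in for real data.
papers = [
    {"id": "a", "year": "2023"},
    {"id": "b", "year": "2024"},
    {"id": "c", "year": "2025"},
]

# Train on 2023-2024 submissions, evaluate on 2025.
train = [p for p in papers if p["year"] in ("2023", "2024")]
test = [p for p in papers if p["year"] == "2025"]
print(len(train), len(test))  # 2 1
```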
## Dataset Creation
### Curation Rationale
This dataset was created to enable research on understanding and improving the peer review process in machine learning conferences. By combining full paper content with complete review discussions, researchers can:
- Analyze how paper characteristics relate to review outcomes
- Study the language and patterns in constructive reviews
- Build systems to assist reviewers or authors
- Investigate fairness and bias in peer review
### Source Data
The data was collected from the OpenReview platform, which hosts the ICLR review process in an open format. All reviews, discussions, and decisions are publicly available on the OpenReview website.
### Data Processing
- **Paper Content Extraction**: Full papers were converted from PDF sources to Markdown format
- **Review Aggregation**: Review discussions were extracted from OpenReview forums
- **Quality Filtering**: Records with missing essential fields (`id`, `content`, or `related_notes`) were removed
- **Metadata Extraction**: Structural metadata (TOC, page statistics) was extracted from papers
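The quality-filtering step can be sketched as follows. Field names match the schema in this card, but the predicate is an illustrative reconstruction, not the original pipeline's code:

```python
# Essential fields per this card's filtering rule.
REQUIRED_FIELDS = ("id", "content", "related_notes")

def is_complete(record):
    """Keep only records where every essential field is present and non-empty."""
    return all(record.get(field) for field in REQUIRED_FIELDS)

# Synthetic records: the second is dropped for its empty content field.
records = [
    {"id": "x", "content": "full text", "related_notes": "reviews..."},
    {"id": "y", "content": "", "related_notes": "reviews..."},
]
cleaned = [r for r in records if is_complete(r)]
print(len(cleaned))  # 1
```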
## Considerations for Using the Data
### Social Impact of the Dataset
This dataset provides transparency into the peer review process, which is typically opaque. By making reviews and discussions publicly available, it enables:
- Analysis of review quality and consistency
- Identification of potential biases in evaluation
- Development of tools to assist the review process
- Educational resources for understanding peer review
However, users should be aware that:
- Reviews represent subjective opinions of reviewers
- Reviewer identities are not included to protect privacy
- Reviews should be interpreted within the context of the specific conference and time period
### Discussion of Biases
The dataset may contain several biases:
- **Reviewer Bias**: Different reviewers may have different standards and tendencies
- **Conference-Specific Norms**: ICLR review norms may differ from other venues
- **Temporal Shifts**: Review criteria may have evolved across 2023-2025
- **Selection Bias**: Papers in this dataset represent ICLR submissions, which may not generalize to all ML research
### Other Known Limitations
- Reviewer identities are anonymized to protect privacy
- Some papers may have incomplete review histories (e.g., withdrawn submissions)
- The `related_notes` field contains unstructured text that may require parsing for specific analyses
## Additional Information
### Dataset Curators
This dataset was compiled from publicly available OpenReview data.
### Licensing Information
The papers and reviews in this dataset are subject to the copyright and terms of use of the OpenReview platform and the respective authors.
### Citation Information
If you use this dataset, please cite:
```bibtex
@dataset{iclr_papers_with_reviews,
  title  = {ICLR Papers with Reviews (2023-2025)},
  author = {Dataset Curator},
  year   = {2025},
  note   = {Compiled from OpenReview platform data}
}
```
### Contributions
This dataset was created by extracting and aggregating publicly available data from the OpenReview platform for research purposes.
## Usage Examples
### Loading the Dataset
```python
import json

# Load from JSONL
with open('ICLR_merged_cleaned_huggingface.jsonl', 'r', encoding='utf-8') as f:
    for line in f:
        paper = json.loads(line)
        print(f"Title: {paper['title']}")
        print(f"Year: {paper['year']}")
        print(f"Review Data: {paper['related_notes'][:200]}...")
        break
```
### Analyzing Review Content
```python
# Extract reviews for analysis
def extract_reviews(paper):
    """Parse review-related information from the related_notes field."""
    notes = paper['related_notes']
    # related_notes is unstructured text; as a simple heuristic, split it
    # into blank-line-separated segments and keep the review-like ones.
    # Adapt this to the actual note format for your analysis.
    segments = [s.strip() for s in notes.split('\n\n') if s.strip()]
    reviews = [s for s in segments if 'Review' in s or 'Rating' in s]
    # Look for a final decision keyword anywhere in the notes.
    decision = None
    if 'Accept' in notes:
        decision = 'Accept'
    elif 'Reject' in notes:
        decision = 'Reject'
    return {
        'paper_id': paper['id'],
        'title': paper['title'],
        'reviews': reviews,
        'decision': decision,
    }
```
## Acknowledgments
This dataset would not be possible without the open peer review platform provided by OpenReview and the contributions of the ICLR community.