|
|
--- |
|
|
license: cc-by-4.0 |
|
|
task_categories: |
|
|
- feature-extraction |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- review_quality_assessment
|
|
- peer_review |
|
|
- llm_based_evaluation |
|
|
pretty_name: RottenReviews |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
configs: |
|
|
- config_name: ICLR2024 |
|
|
data_files: |
|
|
- split: data |
|
|
path: |
|
|
- raw/iclr2024_submissions.jsonl |
|
|
- config_name: NIPS2023 |
|
|
data_files: |
|
|
- split: data |
|
|
path: |
|
|
- raw/neurips2023_submissions.jsonl |
|
|
- config_name: F1000Journal |
|
|
data_files: |
|
|
- split: data |
|
|
path: |
|
|
- raw/f1000research_submissions.jsonl |
|
|
- config_name: SemanticWebJournal |
|
|
data_files: |
|
|
- split: data |
|
|
path: |
|
|
- raw/semantic-web-journal_submissions.jsonl |
|
|
- config_name: human_annotated_data |
|
|
data_files: |
|
|
- split: data |
|
|
path: |
|
|
- human_annotation_data.jsonl |
|
|
--- |
|
|
|
|
|
# RottenReviews: Benchmarking Review Quality with Human and LLM-Based Judgments |
|
|
|
|
|
Quick links: [Paper](https://reviewer.ly/wp-content/themes/reviewerly-vite-theme/dist/rottenreviews.pdf) | [Code](https://github.com/Reviewerly-Inc/RottenReviews)
|
|
|
|
|
|
|
|
|
|
|
**RottenReviews** is a benchmark dataset designed to facilitate research on **peer review quality assessment** using multiple types of evaluation signals, including human expert annotations, structured metrics derived from textual features, and large language model (LLM)-based judgments. |
|
|
|
|
|
Note: This Hugging Face repository contains only the raw submission files and the human annotation records. Some processed dataset components are hosted on Google Drive; see the GitHub repository documentation for download instructions.
|
|
|
|
|
## Dataset Summary
|
|
|
|
|
Peer review quality is central to the scientific publishing process, but systematic evaluation at scale is challenging. The **RottenReviews** dataset addresses this gap by providing a large corpus of academic peer reviews enriched with reviewer metadata and multiple quality indicators: |
|
|
|
|
|
* **Raw peer reviews** from multiple academic venues (e.g., F1000Research, Semantic Web Journal, ICLR, NeurIPS) spanning diverse research areas |
|
|
* **Reviewer profiles** (when available) linked via external scholarly metadata |
|
|
* **Quantifiable metrics** capturing interpretable aspects of review text and reviewer behavior (e.g., lexical diversity, topical alignment, hedging) |
|
|
* **Human expert annotations** over a subset of reviews across multiple quality dimensions (e.g., clarity, fairness, comprehensiveness) |
|
|
* **LLM-based judgments** generated using structured prompts for automated quality assessment |
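
As an illustrative sketch only (not the dataset's actual metric implementation), a quantifiable review-text metric such as lexical diversity can be approximated by a simple type-token ratio:

```python
import re

def lexical_diversity(review_text: str) -> float:
    """Type-token ratio: unique words / total words, a simple proxy for lexical diversity."""
    tokens = re.findall(r"[a-z']+", review_text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

review = "The method is novel, but the evaluation is limited and the baselines are weak."
print(round(lexical_diversity(review), 3))  # -> 0.786
```

The benchmark's released metrics are computed by the accompanying codebase; this snippet only illustrates the kind of interpretable signal involved.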
|
|
|
|
|
The dataset was introduced to support research on benchmarking and modeling peer review quality at scale. It contains thousands of submissions and reviewer profiles, making it one of the most comprehensive resources for peer review quality analysis. |
|
|
|
|
|
|
|
|
## Usage Example
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
dataset = load_dataset("Reviewerly/RottenReviews", "ICLR2024")  # config: one of 'ICLR2024', 'NIPS2023', 'F1000Journal', 'SemanticWebJournal', 'human_annotated_data'
|
|
|
|
|
# Access the records in the "data" split
processed_reviews = dataset["data"]
print(processed_reviews[0])
|
|
``` |
|
|
|
|
|
|
|
|
## Tasks & Applications
|
|
|
|
|
RottenReviews supports a wide range of research tasks, including: |
|
|
|
|
|
* **Peer Review Quality Prediction** |
|
|
* **Benchmarking LLM-Based Review Evaluation Methods** |
|
|
* **Correlation Analysis Between Metrics and Human Judgments** |
|
|
* **Reviewer Behavior and Metadata Modeling** |
|
|
* **Interpretability Studies for Review Quality Signals** |
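
For example, a correlation analysis between an automatic metric and human judgments can be run with a rank correlation such as Spearman's rho. The sketch below uses made-up scores purely for illustration (not values from the dataset):

```python
def spearman_rho(x, y):
    """Spearman rank correlation, with average ranks assigned to ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for tied values
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical values for illustration only -- not drawn from the dataset.
metric_scores = [0.42, 0.67, 0.55, 0.80, 0.31]  # e.g., a per-review text metric
human_scores = [2, 4, 3, 5, 1]                  # e.g., annotated clarity (1-5)
print(round(spearman_rho(metric_scores, human_scores), 3))  # -> 1.0
```

In practice, `scipy.stats.spearmanr` offers the same statistic with significance testing; the pure-Python version above just keeps the example self-contained.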
|
|
|
|
|
|
|
|
## License & Citation
|
|
|
|
|
The dataset is released under the CC BY 4.0 license; the accompanying code is released under the license specified in the RottenReviews GitHub repository.
|
|
If you use this dataset in academic work, please cite the accompanying RottenReviews paper. |
|
|
|
|
|
```bibtex |
|
|
@inproceedings{ebrahimi2025rottenreviews, |
|
|
title={RottenReviews: Benchmarking Review Quality with Human and LLM-Based Judgments}, |
|
|
author={Ebrahimi, Sajad and Sadeghian, Soroush and Ghorbanpour, Ali and Arabzadeh, Negar and Salamat, Sara and Li, Muhan and Le, Hai Son and Bashari, Mahdi and Bagheri, Ebrahim}, |
|
|
booktitle={Proceedings of the 34th ACM International Conference on Information and Knowledge Management}, |
|
|
series = {CIKM '25}, |
|
|
pages={5642--5649}, |
|
|
year={2025}, |
|
|
url = {https://doi.org/10.1145/3746252.3761506}, |
|
|
doi = {10.1145/3746252.3761506} |
|
|
} |
|
|
``` |
|
|
|
|
|
|