sadjadeb committed on
Commit c2f9640 · verified · 1 Parent(s): a87507a

Update README.md

Files changed (1): README.md +132 -3
README.md CHANGED
---
license: cc-by-4.0
task_categories:
- feature-extraction
language:
- en
tags:
- review_quality_assesment
- peer_review
- llm_based_evaluation
pretty_name: RottenReviews
size_categories:
- 10K<n<100K
configs:
- config_name: RottenReviews
  data_files:
  - split: ICLR2024
    path:
    - raw/iclr2024_submissions.jsonl
  - split: NIPS2023
    path:
    - raw/neurips2023_submissions.jsonl
  - split: F1000 Journal
    path:
    - raw/f1000research_submissions.jsonl
  - split: Semantic Web Journal
    path:
    - raw/semantic-web-journal_submissions.jsonl
  - split: Human Annotation Data
    path:
    - human_annotation_data.jsonl
---

# RottenReviews: Benchmarking Review Quality with Human and LLM-Based Judgments

Quick links: 📃 [Paper](https://reviewer.ly/wp-content/themes/reviewerly-vite-theme/dist/rottenreviews.pdf) | ⚙️ [Code](https://github.com/Reviewerly-Inc/RottenReviews)

**RottenReviews** is a benchmark dataset designed to facilitate research on **peer review quality assessment** using multiple types of evaluation signals, including human expert annotations, structured metrics derived from textual features, and large language model (LLM)-based judgments.

## 🧠 Dataset Summary

Peer review quality is central to the scientific publishing process, but evaluating it systematically at scale is challenging. The **RottenReviews** dataset addresses this gap by providing a large corpus of academic peer reviews enriched with reviewer metadata and multiple quality indicators:

* **Raw peer reviews** from multiple academic venues (e.g., F1000Research, the Semantic Web Journal, ICLR, NeurIPS) spanning diverse research areas
* **Reviewer profiles** (when available) linked via external scholarly metadata
* **Quantifiable metrics** capturing interpretable aspects of review text and reviewer behavior (e.g., lexical diversity, topical alignment, hedging)
* **Human expert annotations** over a subset of reviews across multiple quality dimensions (e.g., clarity, fairness, comprehensiveness)
* **LLM-based judgments** generated using structured prompts for automated quality assessment

The dataset was introduced to support research on benchmarking and modeling peer review quality at scale. It contains thousands of submissions and reviewer profiles, making it one of the most comprehensive resources for peer review quality analysis.
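To give a concrete sense of what a "quantifiable metric" can look like, here is a minimal sketch of a lexical-diversity score as a type-token ratio. This is an invented illustration, not the dataset's actual feature-extraction code, which may tokenize and normalize differently:

```python
def type_token_ratio(text: str) -> float:
    """Lexical diversity as unique tokens / total tokens (illustrative only)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

review = "The paper is clear, but the evaluation is limited and the claims are too strong."
print(round(type_token_ratio(review), 2))  # → 0.8
```

Scores near 1.0 indicate little word repetition; very low scores can flag templated or low-effort review text.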

## 📂 Dataset Structure

The dataset is organized into multiple components reflecting different stages of processing and annotation:

| **Folder / File** | **Description** | **Format** |
| --- | --- | --- |
| `raw/` | Raw extracted submissions and reviews from source venues | JSON / PKL |
| `processed/` | Cleaned and structured review records | CSV / JSON |
| `human_annotation/` | Subset of reviews annotated by human experts | CSV / JSON |
| `feature_extraction/` | Scripts and outputs for computing quantifiable metrics | Notebooks / CSV |
| `predict_review_quality_score/` | Inputs and outputs for quality prediction models | CSV / JSON |

Due to size constraints, the full dataset is not hosted directly in the repository. Instructions for downloading the data are provided in the project README.

## 📊 Data Fields

### Review Record (example fields)

* `id`: Unique identifier for the submission or review item
* `date`: Submission or review date
* `type`: Item type (e.g., Full Paper)
* `title`: Paper title
* `abstract`: Paper abstract
* `reviews`: A list of review objects, each containing:
  * `reviewer`: Anonymized reviewer identifier
  * `date`: Review submission date
  * `suggestion`: Reviewer recommendation (e.g., accept, reject)
  * `comment`: Free-text review content
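The schema above can be pictured with a small record (all values below are invented for illustration). Flattening the nested `reviews` list into one row per review is a common first step when analyzing reviews across papers:

```python
# A hypothetical record following the schema above (values are invented).
record = {
    "id": "sub-001",
    "date": "2024-01-15",
    "type": "Full Paper",
    "title": "An Example Submission",
    "abstract": "We study ...",
    "reviews": [
        {"reviewer": "R1", "date": "2024-02-01", "suggestion": "accept", "comment": "Well written."},
        {"reviewer": "R2", "date": "2024-02-03", "suggestion": "reject", "comment": "Evaluation is weak."},
    ],
}

# Flatten to one row per review, carrying the paper id along.
rows = [{"paper_id": record["id"], **review} for review in record["reviews"]]
print(len(rows), rows[0]["reviewer"])  # → 2 R1
```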

## 📌 Usage Example

```python
from datasets import load_dataset

dataset = load_dataset("Reviewerly/RottenReviews")

# Access raw ICLR 2024 submissions and their reviews
iclr_data = dataset["ICLR2024"]
print(iclr_data[0])

# Access human annotations
human_data = dataset["Human Annotation Data"]
print(human_data[0])
```

## 🎯 Tasks & Applications

RottenReviews supports a wide range of research tasks, including:

* **Peer Review Quality Prediction**
* **Benchmarking LLM-Based Review Evaluation Methods**
* **Correlation Analysis Between Metrics and Human Judgments**
* **Reviewer Behavior and Metadata Modeling**
* **Interpretability Studies for Review Quality Signals**
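For the correlation-analysis task, a typical starting point is a rank correlation between an automatic metric and human quality scores. The sketch below hand-rolls Spearman's rho (rank transform with tie averaging, then Pearson on the ranks) on invented scores; in practice one would likely reach for `scipy.stats.spearmanr`:

```python
def ranks(xs):
    """Average 1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed values."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Invented example: an automatic review-quality metric vs. human scores.
metric = [0.41, 0.55, 0.30, 0.72, 0.65]
human = [3, 4, 2, 5, 4]
print(round(spearman(metric, human), 3))  # → 0.975
```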

## 🧾 License & Citation

The dataset and accompanying code are released under the license specified in the RottenReviews repository.
If you use this dataset in academic work, please cite the accompanying RottenReviews paper.

```bibtex
@inproceedings{ebrahimi2025rottenreviews,
  title     = {RottenReviews: Benchmarking Review Quality with Human and LLM-Based Judgments},
  author    = {Ebrahimi, Sajad and Sadeghian, Soroush and Ghorbanpour, Ali and Arabzadeh, Negar and Salamat, Sara and Li, Muhan and Le, Hai Son and Bashari, Mahdi and Bagheri, Ebrahim},
  booktitle = {Proceedings of the 34th ACM International Conference on Information and Knowledge Management},
  series    = {CIKM '25},
  pages     = {5642--5649},
  year      = {2025},
  url       = {https://doi.org/10.1145/3746252.3761506},
  doi       = {10.1145/3746252.3761506}
}
```