sadjadeb committed · verified
Commit 343f11a · 1 Parent(s): c2f9640

Update README.md

Files changed (1): README.md (+4 −46)
README.md CHANGED

````diff
@@ -14,19 +14,7 @@ size_categories:
 configs:
 - config_name: RottenReviews
   data_files:
-  - split: ICLR2024
-    path:
-    - raw/iclr2024_submissions.jsonl
-  - split: NIPS2023
-    path:
-    - raw/neurips2023_submissions.jsonl
-  - split: F1000 Journal
-    path:
-    - raw/f1000research_submissions.jsonl
-  - split: Semantic Web Journal
-    path:
-    - raw/semantic-web-journal_submissions.jsonl
-  - split: Human Annotation Data
+  - split: human_annotation_data
     path:
     - human_annotation_data.jsonl
 ---
@@ -39,6 +27,8 @@ Quick links: 📃 [Paper](https://reviewer.ly/wp-content/themes/reviewerly-vite-
 
 **RottenReviews** is a benchmark dataset designed to facilitate research on **peer review quality assessment** using multiple types of evaluation signals, including human expert annotations, structured metrics derived from textual features, and large language model (LLM)-based judgments.
 
+Note: This HF repo contains only the raw files and the human annotation data records. Some dataset components are available only in our Google Drive. Follow the repository documentation for downloading the processed files.
+
 ## 🧠 Dataset Summary
 
 Peer review quality is central to the scientific publishing process, but systematic evaluation at scale is challenging. The **RottenReviews** dataset addresses this gap by providing a large corpus of academic peer reviews enriched with reviewer metadata and multiple quality indicators:
@@ -52,38 +42,6 @@
 The dataset was introduced to support research on benchmarking and modeling peer review quality at scale. It contains thousands of submissions and reviewer profiles, making it one of the most comprehensive resources for peer review quality analysis.
 
 
-## 📂 Dataset Structure
-
-The dataset is organized into multiple components reflecting different stages of processing and annotation:
-
-| **Folder / File**               | **Description**                                          | **Format**      |
-| ------------------------------- | -------------------------------------------------------- | --------------- |
-| `raw/`                          | Raw extracted submissions and reviews from source venues | JSON / PKL      |
-| `processed/`                    | Cleaned and structured review records                    | CSV / JSON      |
-| `human_annotation/`             | Subset of reviews annotated by human experts             | CSV / JSON      |
-| `feature_extraction/`           | Scripts and outputs for computing quantifiable metrics   | Notebooks / CSV |
-| `predict_review_quality_score/` | Inputs and outputs for quality prediction models         | CSV / JSON      |
-
-Due to size constraints, the full dataset is not hosted directly in the repository. Instructions for downloading the data are provided in the project README.
-
-
-## 📊 Data Fields
-
-### Review Record (example fields)
-
-* `id`: Unique identifier for the submission or review item
-* `date`: Submission or review date
-* `type`: Item type (e.g., Full Paper)
-* `title`: Paper title
-* `abstract`: Paper abstract
-* `reviews`: A list of review objects, each containing:
-
-  * `reviewer`: Anonymized reviewer identifier
-  * `date`: Review submission date
-  * `suggestion`: Reviewer recommendation (e.g., accept, reject)
-  * `comment`: Free-text review content
-
-
 ## 📌 Usage Example
 
 ```python
@@ -96,7 +54,7 @@ processed_reviews = dataset["processed"]
 print(processed_reviews[0])
 
 # Access human annotations
-human_data = dataset["human_annotation"]
+human_data = dataset["human_annotation_data"]
 print(human_data[0])
 ```
````
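
Since the split that remains on the Hub is declared as a plain JSON Lines file (`human_annotation_data.jsonl`), each line can also be parsed without the `datasets` library. A minimal sketch, assuming a record shaped like the fields listed in the removed Data Fields section; the sample values below are made up for illustration, and real records may carry additional fields:

```python
import json

# Hypothetical JSON Lines record mirroring the documented fields:
# id, date, type, title, abstract, and a list of review objects.
line = (
    '{"id": "sub-001", "date": "2024-01-15", "type": "Full Paper", '
    '"title": "An Example Paper", "abstract": "...", '
    '"reviews": [{"reviewer": "anon-1", "date": "2024-02-01", '
    '"suggestion": "accept", "comment": "Clear and well motivated."}]}'
)

record = json.loads(line)

# Each submission bundles its reviews as a list of objects.
for review in record["reviews"]:
    print(review["reviewer"], review["suggestion"])
```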