Daoze committed on
Commit 8b71559 · verified · 1 Parent(s): a4e448a

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+16 -4)
README.md CHANGED
@@ -48,12 +48,17 @@ The `papers` folder, located in `./papers`, contains the latest versions of the
 
 ### REVIEWS
 
-We organize the reviews corresponding to 19,926 papers into a dictionary keyed by each peer paper, and then merge each paper-review dictionary into a list. This list is split into two files: [REVIEWS_train](./REVIEWS_train.json) and [REVIEWS_test](./REVIEWS_test.json). Specifically, for the $\text{Re}^2$-Review dataset, we sample 1,000 papers along with their reviews to form the test set, while the remaining papers and their reviews are used as the training set.
 
 
 #### Review Data Format
 
-The format of the review data is below. `paper_id` refers to the unique identifier of the paper on OpenReview. `initial_score` is the score before the rebuttal, and `final_score` is the final score after the rebuttal. Fields ending with `_unified` represent the scores after unification. `review_initial_ratings_unified` and `review_final_ratings_unified` are the lists of all review scores before and after the rebuttal, respectively.
 
 
 ```json
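The review split described above can be read with a minimal sketch like the following, assuming each `REVIEWS_{train,test}.json` file is a JSON list of per-paper records carrying the documented fields (`paper_id`, `initial_score`, `final_score`); the helper names `load_reviews` and `index_by_paper` are illustrative, not part of the dataset's tooling.

```python
import json

def load_reviews(path):
    """Load one split file as a list of per-paper review records (assumed layout)."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def index_by_paper(records):
    """Re-key the list by `paper_id` for direct lookup of a paper's reviews."""
    return {rec["paper_id"]: rec for rec in records}

# With the real files this would look like:
#   test_records = load_reviews("REVIEWS_test.json")  # the 1,000 sampled papers
#   by_id = index_by_paper(test_records)
```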
@@ -152,11 +157,18 @@ The format of the review data is below. `paper_id` refers to the unique identifi
 
 ### REBUTTALS
 
-As for the rebuttal, we model the rebuttal and discussion data as a multi-turn conversation between reviewers and authors. This conversational setup enables the training and evaluation of dynamic, interactive LLM-based reviewing assistants, which can offer authors more practical and actionable guidance to improve their work before submission. To facilitate this, we split the rebuttal data into two files: [REBUTTAL_train](./REBUTTAL_train.json) and [REBUTTAL_test](./REBUTTAL_test.json). Specifically, for the $\text{Re}^2$-Rebuttal dataset, we select 500 papers along with their rebuttals as the test set, while the remaining data is used for training.
 
 #### Rebuttal Data Format
 
-The format of the review data is below. `paper_id` refers to the unique identifier of the paper on OpenReview. `messages` is formatted as a multi-turn conversation, and `final_score` is the final score after the rebuttal. When the `role` is set to system, it defines the overall context for the entire rebuttal multi-turn dialogue. The first message with the `role` of user serves to trigger the review process, providing the model with the paper information and confirming its identity as a reviewer. Subsequent messages with the `role` of user serve as the author's responses. Messages with the `role` of assistant serve as the reviewer's comments or replies during the discussion.
 
 ```json
 [
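The role semantics above (system sets the context, the first user turn supplies the paper and triggers the review, later user turns are the author, assistant turns are the reviewer) can be sketched as a small walker over one record's `messages` list; `render_transcript` and the speaker labels are illustrative assumptions, not dataset tooling.

```python
def render_transcript(messages):
    """Flatten one multi-turn rebuttal into labelled 'Speaker: text' lines."""
    lines = []
    seen_user = False
    for msg in messages:
        role = msg["role"]
        if role == "system":
            label = "Context"
        elif role == "user":
            # The first user message triggers the review; later ones are the author.
            label = "Trigger" if not seen_user else "Author"
            seen_user = True
        else:  # assistant
            label = "Reviewer"
        lines.append(f"{label}: {msg['content']}")
    return "\n".join(lines)

# Tiny synthetic record standing in for a real `messages` list.
example = [
    {"role": "system", "content": "You are a reviewer in a rebuttal discussion."},
    {"role": "user", "content": "Here is the paper ..."},
    {"role": "assistant", "content": "My main concern is novelty."},
    {"role": "user", "content": "We added a comparison experiment."},
]
print(render_transcript(example))
```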
 
 
 ### REVIEWS
 
+We organize the reviews corresponding to 19,926 papers into a dictionary keyed by each peer paper, and then merge each paper-review dictionary into a list. This list is split into two files: [REVIEWS_train](./REVIEWS_train.json) and [REVIEWS_test](./REVIEWS_test.json). Specifically, for the Re²-Review dataset, we sample 1,000 papers along with their reviews to form the test set, while the remaining papers and their reviews are used as the training set.
 
 
 #### Review Data Format
 
+The format of the review data is below.
+- `paper_id` refers to the unique identifier of the paper on OpenReview
+- `initial_score` is the score before the rebuttal
+- `final_score` is the final score after the rebuttal
+- Fields ending with `_unified` represent the scores after unification
+- `review_initial_ratings_unified` and `review_final_ratings_unified` are the lists of all review scores before and after the rebuttal, respectively
 
 
 ```json
 
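One way to put the two unified rating lists described above to work is measuring how much scores move during rebuttal; only the two `_unified` list fields come from the README, and `mean_rating_shift` is a hypothetical helper.

```python
def mean_rating_shift(record):
    """Average (final - initial) change across a paper's unified ratings."""
    initial = record["review_initial_ratings_unified"]
    final = record["review_final_ratings_unified"]
    if not initial:
        return 0.0
    return sum(f - i for i, f in zip(initial, final)) / len(initial)

# Synthetic record standing in for one entry of REVIEWS_{train,test}.json.
record = {
    "review_initial_ratings_unified": [4, 5, 6],
    "review_final_ratings_unified": [5, 5, 7],
}
print(mean_rating_shift(record))  # (1 + 0 + 1) / 3
```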
 
 ### REBUTTALS
 
+As for the rebuttal, we model the rebuttal and discussion data as a multi-turn conversation between reviewers and authors. This conversational setup enables the training and evaluation of dynamic, interactive LLM-based reviewing assistants, which can offer authors more practical and actionable guidance to improve their work before submission. To facilitate this, we split the rebuttal data into two files: [REBUTTAL_train](./REBUTTAL_train.json) and [REBUTTAL_test](./REBUTTAL_test.json). Specifically, for the Re²-Rebuttal dataset, we select 500 papers along with their rebuttals as the test set, while the remaining data is used for training.
 
 #### Rebuttal Data Format
 
+The format of the rebuttal data is below.
+- `paper_id` refers to the unique identifier of the paper on OpenReview
+- `messages` is formatted as a multi-turn conversation
+- `final_score` is the final score after the rebuttal
+- When the `role` is `system`, it defines the overall context for the entire rebuttal multi-turn dialogue
+- The first message with the `role` of `user` triggers the review process, providing the model with the paper information and confirming its identity as a reviewer
+- Subsequent messages with the `role` of `user` serve as the author's responses
+- Messages with the `role` of `assistant` serve as the reviewer's comments or replies during the discussion
 
 ```json
 [