boda committed (verified)
Commit 6329e91 · 1 Parent(s): 9ee03b6

Update README.md

Files changed (1): README.md (+81 -0)
README.md CHANGED
@@ -448,3 +448,84 @@ configs:
  - split: full
    path: verifiability/full-*
---

# RevUtil: Measuring the Utility of Peer Reviews for Authors

[📄 Paper COMING SOON!]
[💻 GitHub Repository](https://github.com/bodasadallah/RevUtil)

---

## 📚 Overview

Providing **constructive feedback** to authors is a key goal of peer review. To support research on evaluating and generating useful peer review comments, we introduce **RevUtil**, a dataset for measuring the utility of peer review feedback.

RevUtil focuses on four main aspects of review comments:

- **Actionability** – Can the author act on the comment?
- **Grounding & Specificity** – Is the comment concrete and tied to the paper?
- **Verifiability** – Can the statement be checked against the paper?
- **Helpfulness** – Does the comment assist the author in improving their work?

---

## 🧑‍🔬 RevUtil Human

- **1,430** review comments from real peer reviews.
- Each comment is annotated independently by **three human raters**.
- Labels are provided as `"gold"` (3/3 agreement), `"silver"` (2/3 agreement), or `"none"` (no agreement).
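
The agreement scheme above can be sketched as a small helper (illustrative only; `majority_label` and the example label strings are hypothetical, not values taken from the dataset):

```python
from collections import Counter

def majority_label(annotations):
    """Derive (label, label_type) from three independent annotator labels.

    3/3 agreement -> "gold", 2/3 -> "silver", no majority -> (None, "none").
    """
    label, count = Counter(annotations).most_common(1)[0]
    if count == 3:
        return label, "gold"
    if count == 2:
        return label, "silver"
    return None, "none"

print(majority_label(["actionable", "actionable", "actionable"]))  # ('actionable', 'gold')
print(majority_label(["actionable", "actionable", "vague"]))       # ('actionable', 'silver')
print(majority_label(["a", "b", "c"]))                             # (None, 'none')
```
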

**Key columns:**

| Column              | Description                                   |
| ------------------- | --------------------------------------------- |
| `paper_id`          | ID of the reviewed paper                      |
| `venue`             | Conference or journal name                    |
| `focused_review`    | Full review (weakness + suggestion sections)  |
| `review_point`      | Individual review comment being evaluated     |
| `id`                | Unique ID for the review point                |
| `batch`             | Annotation batch/study identifier             |
| `ASPECT`            | Dictionary mapping annotators to their labels |
| `ASPECT_label`      | Majority label (if available)                 |
| `ASPECT_label_type` | `"gold"`, `"silver"`, or `"none"`             |

---

## 🤖 RevUtil Synthetic

- **10,000** synthetically labeled review comments generated with **GPT-4o**.
- Each example includes both a **score** and a **rationale**.
- Split into **9k train** / **1k test**.

**Key columns:**

| Column                     | Description                                  |
| -------------------------- | -------------------------------------------- |
| `paper_id`                 | ID of the reviewed paper                     |
| `venue`                    | Conference or journal name                   |
| `focused_review`           | Full review (weakness + suggestion sections) |
| `review_point`             | Individual review comment                    |
| `id`                       | Unique ID for the review point               |
| `chatgpt_ASPECT_score`     | Model-generated score for the aspect         |
| `chatgpt_ASPECT_rationale` | GPT-4o's rationale for the assigned score    |
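
`ASPECT` in the tables above is a placeholder for one of the four aspect names. Assuming lowercase, underscore-separated aspect slugs (an assumption; check the actual dataset schema), the concrete synthetic column names can be enumerated as:

```python
# Hypothetical slugs standing in for the four aspects described above.
aspects = ["actionability", "grounding_specificity", "verifiability", "helpfulness"]

# Expand the ASPECT placeholder into the concrete score/rationale column names.
synthetic_columns = [
    f"chatgpt_{aspect}_{field}"
    for aspect in aspects
    for field in ("score", "rationale")
]
print(synthetic_columns[:2])
# ['chatgpt_actionability_score', 'chatgpt_actionability_rationale']
```
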

---

## 🚀 Usage

You can load the datasets directly via 🤗 Datasets:

```python
from datasets import load_dataset

# Human annotations
human = load_dataset("boda/RevUtil_human")

# Synthetic annotations
synthetic = load_dataset("boda/RevUtil_synthetic")
```

## 📎 Citation