---
pretty_name: SupraReviewBench
language:
  - en
task_categories:
  - text-generation
  - question-answering
size_categories:
  - 1K<n<10K
license: cc-by-4.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: benchmark.jsonl
---

# SupraReviewBench

## Dataset summary

SupraReviewBench is a peer-review benchmark built from OpenReview discussion threads. Each record represents one paper and its full review discussion. Reviewer opinions are split into atomic blocks, labeled with a taxonomy, grouped by discussion point, and validated for correctness via conflict adjudication and author-refutation analysis.

The dataset is intended for opinion-level evaluation and training, with explicit labels that mark which reviewer opinions are likely correct or incorrect.

## Dataset viewer

The Dataset Viewer reads `benchmark/benchmark.jsonl`. The YAML config above declares a single `train` split and an explicit schema, so the Viewer renders columns normally instead of wrapping records into a single text field.

## Source and coverage

- **Source:** OpenReview discussions (ICLR and NeurIPS). The `conference` field in each record gives the exact venue and year.
- **Unit:** one paper (OpenReview forum id).
- **Language:** primarily English.

## Data format and fields

The dataset is stored as a JSONL file, `benchmark/benchmark.jsonl`. Each line is a single JSON object with the top-level fields below; some fields are optional depending on the paper or venue.

Core fields:

- `id`: OpenReview forum id for the paper (string, unique).
- `conference`: venue label (e.g., `"ICLR 2017"`).
- `content`: paper metadata from OpenReview (title, abstract, authors, PDF path, etc.).
- `decision`: acceptance decision string.
- `reviews`: review discussion threads, each a list of `[role, payload]` pairs.
- `metareview`: meta-review threads with the same `[role, payload]` structure.
- `sentence_texts`: list of atomic sentences; their indices are referenced elsewhere.
- `opinions`: list of labeled opinion blocks (see below).
- `opinion_groups`: list of groups; each group is a list of opinion indices that discuss the same point.
- `conflicts_validation`: list of `"correct"`/`"incorrect"` labels aligned to `opinions`.
- `rebuttal_validation`: list of `"correct"`/`"incorrect"` labels aligned to `opinions`.
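
Since each line is one JSON object, the file can be read with the standard library alone. A minimal sketch (the `load_records` helper and the sample line are illustrative, not part of the dataset):

```python
import json

def load_records(path):
    """Yield one record (dict) per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# A hypothetical record in the documented shape (values are illustrative):
sample_line = '{"id": "rk9eAFcxg", "conference": "ICLR 2017", "decision": "Accept", "opinions": []}'
record = json.loads(sample_line)
print(record["conference"])  # each record is one paper, keyed by its forum id
```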

Opinion block structure: each entry in `opinions` is a 2-element list:

1. `sources`: a list of `[role, [sentence_ids]]` pairs
2. `tags`: a list of taxonomy labels (multi-label)

Example (simplified):

```json
{
  "id": "rk9eAFcxg",
  "conference": "ICLR 2017",
  "opinions": [
    [
      [["Reviewer 1", [0, 1]], ["Author", [4, 5]]],
      ["QUAL-EXP", "QUAL-CMP"]
    ]
  ],
  "opinion_groups": [[0]],
  "conflicts_validation": ["correct"],
  "rebuttal_validation": ["correct"],
  "PDF_path": "benchmark/PDF/ICLR2017_rk9eAFcxg.pdf",
  "MD_path": "benchmark/MD/ICLR2017_rk9eAFcxg.md"
}
```
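
The sentence ids in `sources` index into `sentence_texts`, so an opinion can be resolved back to the text each role wrote. A sketch on a hand-made record (the sentence strings and the `resolve_opinion` helper are placeholders of my own):

```python
# Hypothetical record carrying only the fields used below.
record = {
    "sentence_texts": ["s0", "s1", "s2", "s3", "s4", "s5"],
    "opinions": [
        [
            [["Reviewer 1", [0, 1]], ["Author", [4, 5]]],  # sources
            ["QUAL-EXP", "QUAL-CMP"],                      # tags
        ]
    ],
}

def resolve_opinion(record, idx):
    """Return (tags, {role: [sentence strings]}) for opinion number `idx`."""
    sources, tags = record["opinions"][idx]
    by_role = {}
    for role, sent_ids in sources:
        by_role.setdefault(role, []).extend(
            record["sentence_texts"][i] for i in sent_ids
        )
    return tags, by_role

tags, text_by_role = resolve_opinion(record, 0)
```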

## Taxonomy labels

Labels follow a fixed taxonomy of five coarse categories with sublabels, plus an `N/A` label:

- **QUAL** (Quality): `QUAL-MET`, `QUAL-EXP`, `QUAL-REP`, `QUAL-CMP`, `QUAL-STA`
- **CLAR** (Clarity): `CLAR-WRT`, `CLAR-NOT`, `CLAR-FIG`
- **SIGN** (Significance): `SIGN-BRD`, `SIGN-DOM`, `SIGN-SOT`, `SIGN-IMP`
- **ORIG** (Originality): `ORIG-PROB`, `ORIG-MTH`, `ORIG-ANL`, `ORIG-EXP`, `ORIG-COM`, `ORIG-NEG`
- **POL** (Policy/Compliance): `POL-ETH`, `POL-DAT`, `POL-ANO`, `POL-PLG`, `POL-IMP`
- `N/A`: polite text or non-substantive content
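
For multi-label work it can help to hold the taxonomy in a lookup table and sanity-check `tags` against it. A sketch (the names `TAXONOMY`, `VALID_TAGS`, and `coarse_category` are my own, not part of the dataset):

```python
# The label set exactly as listed above.
TAXONOMY = {
    "QUAL": ["QUAL-MET", "QUAL-EXP", "QUAL-REP", "QUAL-CMP", "QUAL-STA"],
    "CLAR": ["CLAR-WRT", "CLAR-NOT", "CLAR-FIG"],
    "SIGN": ["SIGN-BRD", "SIGN-DOM", "SIGN-SOT", "SIGN-IMP"],
    "ORIG": ["ORIG-PROB", "ORIG-MTH", "ORIG-ANL", "ORIG-EXP", "ORIG-COM", "ORIG-NEG"],
    "POL": ["POL-ETH", "POL-DAT", "POL-ANO", "POL-PLG", "POL-IMP"],
}
# All valid tag values, including the standalone N/A label.
VALID_TAGS = {t for subs in TAXONOMY.values() for t in subs} | {"N/A"}

def coarse_category(tag):
    """Map a sublabel like "QUAL-EXP" to its coarse category ("N/A" maps to itself)."""
    return tag if tag == "N/A" else tag.split("-", 1)[0]
```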

## Annotations and validation

Two validation signals are provided, each aligned to `opinions`:

- `conflicts_validation`: results of reviewer-opinion conflict adjudication.
- `rebuttal_validation`: results of author-refutation validation.

Values are `"correct"` or `"incorrect"`.
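
Because both lists are index-aligned with `opinions`, they can be combined per opinion, e.g. to keep only opinions that both signals mark correct. A sketch with invented label lists (the `agreed_correct` helper is hypothetical):

```python
# Index-aligned validation lists for a hypothetical record with three opinions.
conflicts_validation = ["correct", "incorrect", "correct"]
rebuttal_validation = ["correct", "correct", "incorrect"]

def agreed_correct(conflicts, rebuttal):
    """Indices of opinions that both validation signals label "correct"."""
    return [
        i
        for i, (c, r) in enumerate(zip(conflicts, rebuttal))
        if c == "correct" and r == "correct"
    ]

print(agreed_correct(conflicts_validation, rebuttal_validation))  # -> [0]
```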

## PDF and Markdown files

`PDF_path` and `MD_path` are string paths to local assets used during curation. The files themselves are not included in the dataset repo (the PDFs are too large). The fields remain plain strings and do not affect Dataset Viewer loading.

## Intended use

This dataset is designed for:

- multi-label classification of reviewer opinions
- opinion grouping and conflict detection
- evaluation of reviewer correctness and disagreement

It is not intended for ranking papers or making accept/reject decisions.

## Limitations

- Labels are produced with LLM assistance and are not perfect.
- Some venues and years may have missing or incomplete review metadata.
- PDF and Markdown assets are not included in the dataset repo.

## License

This dataset is released under CC BY 4.0.

## Citation

If you use this dataset, please cite the associated paper or this repository. Add a BibTeX entry here if you have a preferred citation.