SupraReviewBench
Dataset summary
SupraReviewBench is a peer-review benchmark built from OpenReview discussion threads. Each record represents one paper and its full review discussion. Reviewer opinions are split into atomic blocks, labeled with a taxonomy, grouped by discussion point, and validated for correctness via conflict adjudication and author-refutation analysis.
The dataset is intended for opinion-level evaluation and training, with explicit labels that mark which reviewer opinions are likely correct or incorrect.
Dataset viewer
The Dataset Viewer reads `benchmark/benchmark.jsonl`. The YAML config above declares a single `train` split and an explicit schema, so the Viewer renders columns normally instead of wrapping records into a single text field.
Source and coverage
- Source: OpenReview discussions (ICLR and NeurIPS). Use the `conference` field in each record to see the exact venue and year.
- Unit: one paper (OpenReview forum id).
- Language: primarily English.
Data format and fields
The dataset is stored as a JSONL file named `benchmark/benchmark.jsonl`. Each line is a single JSON object with the following top-level fields; some fields are optional depending on the paper or venue.
Core fields:
- `id`: OpenReview forum id for the paper (string, unique).
- `conference`: venue label (e.g., `"ICLR 2017"`).
- `content`: paper metadata from OpenReview (title, abstract, authors, pdf path, etc.).
- `decision`: acceptance decision string.
- `reviews`: review discussion threads, each a list of `[role, payload]` pairs.
- `metareview`: meta-review threads with the same `[role, payload]` structure.
- `sentence_texts`: list of atomic sentences; indices elsewhere refer into this list.
- `opinions`: list of labeled opinion blocks (see below).
- `opinion_groups`: list of groups; each group is a list of opinion indices that discuss the same point.
- `conflicts_validation`: list of `"correct"`/`"incorrect"` labels aligned to `opinions`.
- `rebuttal_validation`: list of `"correct"`/`"incorrect"` labels aligned to `opinions`.
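Loading the file is standard JSONL handling: one `json.loads` per line. A minimal sketch (not an official loader; it writes a single simplified record to a temporary file so the snippet is self-contained):

```python
import json
import tempfile

# One simplified record, following the field layout described above.
record = {
    "id": "rk9eAFcxg",
    "conference": "ICLR 2017",
    "decision": "Accept (Poster)",
    "opinions": [[[["Reviewer 1", [0, 1]], ["Author", [4, 5]]], ["QUAL-EXP"]]],
    "conflicts_validation": ["correct"],
    "rebuttal_validation": ["correct"],
}

# JSONL: one JSON object per line. Write one record, then read it back.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write(json.dumps(record) + "\n")
    path = f.name

with open(path) as f:
    records = [json.loads(line) for line in f if line.strip()]

print(records[0]["id"], records[0]["decision"])
```

With the real file, replace `path` with `benchmark/benchmark.jsonl`; each parsed object is one paper.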
Opinion block structure:
Each entry in `opinions` is a 2-element list:

- `sources`: list of `[role, [sentence_ids]]` pairs
- `tags`: list of taxonomy labels (multi-label)
Example (simplified):

```json
{
  "id": "rk9eAFcxg",
  "conference": "ICLR 2017",
  "opinions": [
    [
      [["Reviewer 1", [0, 1]], ["Author", [4, 5]]],
      ["QUAL-EXP", "QUAL-CMP"]
    ]
  ],
  "opinion_groups": [[0]],
  "conflicts_validation": ["correct"],
  "rebuttal_validation": ["correct"],
  "PDF_path": "benchmark/PDF/ICLR2017_rk9eAFcxg.pdf",
  "MD_path": "benchmark/MD/ICLR2017_rk9eAFcxg.md"
}
```
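Unpacking an opinion block follows directly from this structure: each block splits into its `sources` and `tags`, and sentence indices resolve against the record-level `sentence_texts` list. A minimal sketch with illustrative sentence text (the field layout is from this card; the sentences are made up):

```python
# Record shaped like the simplified example above; sentence_texts is illustrative.
record = {
    "sentence_texts": [
        "The results are strong.", "But baselines are missing.",
        "Sec. 4 is unclear.", "Thanks!", "We added DANN.", "See Table 3.",
    ],
    "opinions": [
        [[["Reviewer 1", [0, 1]], ["Author", [4, 5]]], ["QUAL-EXP", "QUAL-CMP"]],
    ],
}

for sources, tags in record["opinions"]:
    for role, sentence_ids in sources:
        # Indices in each source resolve into the record's sentence_texts list.
        text = " ".join(record["sentence_texts"][i] for i in sentence_ids)
        print(role, tags, "->", text)
```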
Taxonomy labels
Labels follow a fixed taxonomy with 5 coarse categories and sublabels:
- QUAL (Quality): QUAL-MET, QUAL-EXP, QUAL-REP, QUAL-CMP, QUAL-STA
- CLAR (Clarity): CLAR-WRT, CLAR-NOT, CLAR-FIG
- SIGN (Significance): SIGN-BRD, SIGN-DOM, SIGN-SOT, SIGN-IMP
- ORIG (Originality): ORIG-PROB, ORIG-MTH, ORIG-ANL, ORIG-EXP, ORIG-COM, ORIG-NEG
- POL (Policy/Compliance): POL-ETH, POL-DAT, POL-ANO, POL-PLG, POL-IMP
- N/A: polite text or non-substantive content
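For multi-label work it is often convenient to collapse sublabels to their coarse category. A small helper (our own convenience function, not part of the dataset; it relies on the `COARSE-SUB` naming pattern above, with `N/A` as the only unhyphenated label):

```python
def coarse_category(label: str) -> str:
    """Map a taxonomy sublabel (e.g. 'QUAL-EXP') to its coarse category.

    'N/A' carries no sublabel and is returned unchanged.
    """
    return label.split("-", 1)[0] if "-" in label else label

print(coarse_category("QUAL-EXP"))   # QUAL
print(coarse_category("ORIG-PROB"))  # ORIG
print(coarse_category("N/A"))        # N/A
```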
Annotations and validation
Two validation signals are provided, each aligned to `opinions`:

- `conflicts_validation`: results of reviewer opinion conflict adjudication.
- `rebuttal_validation`: results of author refutation validation.

Values are `"correct"` or `"incorrect"`.
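Because both lists are index-aligned to `opinions`, comparing the two signals per record is a simple zip. A sketch with illustrative labels (not taken from any real record):

```python
# Illustrative, index-aligned validation labels for four opinions.
conflicts = ["correct", "correct", "incorrect", "correct"]
rebuttal = ["correct", "incorrect", "incorrect", "correct"]

# Fraction of opinions where the two validation signals agree.
agree = sum(c == r for c, r in zip(conflicts, rebuttal)) / len(conflicts)

# An opinion might be flagged for inspection if either signal marks it incorrect.
flagged = [
    i for i, (c, r) in enumerate(zip(conflicts, rebuttal))
    if "incorrect" in (c, r)
]

print(agree, flagged)
```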
PDF and Markdown files
`PDF_path` and `MD_path` are string paths to local assets used during curation. These files are not included in the dataset repo (the PDFs are too large); the fields remain plain strings and do not affect Dataset Viewer loading.
Intended use
This dataset is designed for:
- multi-label classification of reviewer opinions
- opinion grouping and conflict detection
- evaluation of reviewer correctness and disagreement
It is not intended for ranking papers or making accept/reject decisions.
Limitations
- Labels are produced with LLM assistance and are not perfect.
- Some venues and years may have missing or incomplete review metadata.
- PDF and Markdown assets are not included in the dataset repo.
License
This dataset is released under CC BY 4.0.
Citation
If you use this dataset, please cite the associated paper or this repository.