---
license: mit
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 63920351
    num_examples: 2523
  - name: validation
    num_bytes: 52064930
    num_examples: 2086
  download_size: 5955070
  dataset_size: 115985281
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---


This dataset is derived from the [`tau/scrolls`](https://huggingface.co/datasets/tau/scrolls) dataset by running the following script:

```python
import re

from datasets import load_dataset


def _normalize_answer(text):
    return " ".join(text.split()).strip()


def _drop_duplicates_in_input(untokenized_dataset):
    # from scrolls/evaluator/dataset_evaluator.py
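    # inputs that share an id are duplicates that may carry different
    # reference answers: keep only the first occurrence of each id and
    # collect every output for that id into a single list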

    indices_to_keep = []
    id_to_idx = {}
    outputs = []
    for i, (id_, output) in enumerate(
        zip(untokenized_dataset["id"], untokenized_dataset["output"])
    ):
        if id_ in id_to_idx:
            outputs[id_to_idx[id_]].append(output)
            continue
        indices_to_keep.append(i)
        id_to_idx[id_] = len(outputs)
        outputs.append([output])
    untokenized_dataset = untokenized_dataset.select(indices_to_keep).flatten_indices()
    untokenized_dataset = untokenized_dataset.remove_columns("output")
    untokenized_dataset = untokenized_dataset.add_column("outputs", outputs)
    return untokenized_dataset


def _process_doc_prepended_question(doc):
    # the question is prepended to the context, separated by a blank line
    text = doc["input"]
    split = text.find("\n\n")
    return {
        "id": doc["id"],
        "pid": doc["pid"],
        "input": text,
        "outputs": doc["outputs"],
        "question": text[:split],
        "text": text[split + 2 :],
    }


def process_doc(doc):
    quality_multiple_choice_pattern = re.compile(r" *\([A-D]\) *")
    doc = _process_doc_prepended_question(doc)
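    # doc["text"] now starts with the answer-choices block,
    # e.g. " (A) ... (B) ... (C) ... (D) ...", followed by a blank line and
    # the story itself (format inferred from the split logic below)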

    split = doc["text"].find("\n\n", doc["text"].find("(D)"))
    choices_text = doc["text"][:split]

    doc["text"] = doc["text"][split:].strip()
    doc["choices"] = [
        _normalize_answer(choice)
        for choice in re.split(quality_multiple_choice_pattern, choices_text)[1:]
    ]
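    # the gold label is the index of the (normalized) reference answer
    # among the extracted choices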
    doc["gold"] = doc["choices"].index(_normalize_answer(doc["outputs"][0]))
    return doc


def get_quality_dataset():
    """
    download and processes the quality dataset following the lm-evaluation-harness scrolls_quality task

    The processed dataset has the following train & validation splits with 2523 & 2086 examples respectively.
    fields to be used during evaluation:
    - question: the question prompt
    - text: the context
    - choices: list of choices (4 in total)
    - gold: index of the correct choice
    """
    quality_dataset = load_dataset("tau/scrolls", "quality")
    del quality_dataset["test"]  # drop test split -> no ground truths
    for split in quality_dataset:
        quality_dataset[split] = _drop_duplicates_in_input(quality_dataset[split])
    quality_dataset = quality_dataset.map(process_doc)
    return quality_dataset


quality_dataset = get_quality_dataset()
quality_dataset = quality_dataset.rename_columns({"text": "context", "gold": "label"})
quality_dataset = quality_dataset.remove_columns(["pid", "input", "outputs"])
train_ds = quality_dataset["train"]
validation_ds = quality_dataset["validation"]
```
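
As a quick sanity check, the processed splits can be inspected with the variables defined above:

```python
# peek at one processed example (run after the script above)
example = train_ds[0]
print(example["question"])                   # question prompt
print(example["choices"])                    # list of 4 answer choices
print(example["choices"][example["label"]])  # text of the gold answer
print(len(train_ds), len(validation_ds))     # 2523 2086
```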

The processing code is adapted from the [lm-evaluation-harness scrolls task](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/scrolls/task.py).
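
Alternatively, given the `configs` in the header above, the already-processed splits can be loaded directly from this repository; a minimal sketch, where `<repo-id>` is a placeholder for this dataset's Hub id:

```python
from datasets import load_dataset

# "<repo-id>" is a placeholder for this dataset's Hub repository id
ds = load_dataset("<repo-id>")
train_ds = ds["train"]            # 2523 examples
validation_ds = ds["validation"]  # 2086 examples
```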

---
Relevant section from the [SCROLLS: Standardized CompaRison Over Long Language Sequences paper](https://arxiv.org/pdf/2201.03533):
```
QuALITY (Pang et al., 2021): A multiple-choice question answering dataset
over stories and articles sourced from Project Gutenberg, the Open American
National Corpus (Fillmore et al., 1998; Ide and Suderman, 2004), and more.
Experienced writers wrote questions and distractors, and were incentivized
to write answerable, unambiguous questions such that in order to correctly
answer them, human annotators must read large portions of the given
document. To measure the difficulty of their questions, Pang et al.
conducted a speed validation process, where another set of annotators were
asked to answer questions given only a short period of time to skim through
the document. As a result, 50% of the questions in QuALITY are labeled as
hard, i.e. the majority of the annotators in the speed validation setting
chose the wrong answer.
```