Merge branch 'main' of https://huggingface.co/datasets/tau/zero_scrolls
Files changed:
- README.md (+124 -0)
- zero_scrolls.py (+28 -8)

README.md
ADDED
@@ -0,0 +1,124 @@
---
language:
- en
task_categories:
- question-answering
- summarization
- text-generation
task_ids:
- multiple-choice-qa
tags:
- query-based-summarization
- long-texts
---

## Dataset Description

- **Homepage:** [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/)
- **Leaderboard:** [Leaderboard](https://www.zero.scrolls-benchmark.com/leaderboard)
- **Point of Contact:** [scrolls-benchmark-contact@googlegroups.com](mailto:scrolls-benchmark-contact@googlegroups.com)

# Dataset Card for ZeroSCROLLS

## Overview
ZeroSCROLLS is a zero-shot benchmark for natural language understanding over long texts.

## Leaderboard
The ZeroSCROLLS benchmark leaderboard can be found [here](https://www.zero.scrolls-benchmark.com/leaderboard).

## Tasks
ZeroSCROLLS contains the following tasks:

#### GovReport ([Huang et al., 2021](https://arxiv.org/pdf/2104.02112.pdf))
GovReport is a summarization dataset of reports addressing various national policy issues published by the Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.
The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets; for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in arXiv and PubMed, respectively.

#### SummScreenFD ([Chen et al., 2022](https://arxiv.org/pdf/2104.07091.pdf))
SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones).
Given a transcript of a specific episode, the goal is to produce the episode's recap.
The original dataset is divided into two complementary subsets, based on the source of its community-contributed transcripts.
For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows, making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows.
Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.

#### QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938.pdf))
QMSum is a query-based summarization dataset, consisting of 232 meeting transcripts from multiple domains.
The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control, and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.
Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions, while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.

#### SQuALITY ([Wang et al., 2022](https://arxiv.org/pdf/2205.11465.pdf))
SQuALITY is a question-focused summarization dataset, where given a story from Project Gutenberg, the task is to produce a summary of the story or aspects of it based on a guiding question.
The questions and summaries are original and crowdsourced; experienced writers were guided to design questions that require reading significant parts of the story to answer correctly.

#### Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011.pdf))
Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).
Questions were written by NLP practitioners after reading only the title and abstract of the papers, while another set of NLP practitioners annotated the answers given the entire document.
Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.

#### NarrativeQA ([Kočiský et al., 2018](https://arxiv.org/pdf/1712.07040.pdf))
NarrativeQA is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.
Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs, resulting in about 30 questions and answers for each of the 1,567 books and scripts.
They were encouraged to use their own words rather than copying, and to avoid asking yes/no questions or ones about the cast.
Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).

#### QuALITY ([Pang et al., 2022](https://arxiv.org/pdf/2112.08608.pdf))
QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg, the Open American National Corpus, and more.
Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that, in order to answer them correctly, human annotators must read large portions of the given document.
Reference answers were then determined by a majority vote among the annotators' and writer's answers.
To measure the difficulty of their questions, Pang et al. conducted a speed validation process, where another set of annotators were asked to answer questions given only a short period of time to skim through the document.
As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.

#### MuSiQue ([Trivedi et al., 2022](https://arxiv.org/pdf/2108.00573.pdf))
MuSiQue is a multi-hop question answering dataset, where the inputs are 20 Wikipedia paragraphs and a question that requires multiple hops between different paragraphs.
In the original dataset, each question also has an unanswerable twin question, where the correct answer is not present in the paragraphs.

#### SpaceDigest (New)
SpaceDigest is a new sentiment aggregation task. Given 50 hotel reviews (without their ratings) from the Space dataset (Angelidis et al., 2021), the task is to determine the percentage of positive reviews.
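To make the expected aggregation concrete, here is a minimal sketch of the computation the task asks a model to perform. The sentiment labels and the percentage output format below are illustrative assumptions, not actual Space reviews or the benchmark's prescribed answer string:

```python
# Toy stand-in: per-review sentiment labels for a handful of hotel reviews.
# In the actual task, a model must infer each review's sentiment from raw
# review text; the labels and output format here are illustrative assumptions.
review_sentiments = ["positive", "positive", "negative", "positive", "negative"]

# Percentage of positive reviews, rounded to a whole number.
positive_share = 100 * sum(s == "positive" for s in review_sentiments) / len(review_sentiments)
answer = f"{positive_share:.0f}%"
print(answer)  # 60%
```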

#### BookSumSort (New)
BookSumSort is a new task based on the BookSum dataset (Kryściński et al., 2022), which contains summaries of chapters (or parts) of novels, plays, and long poems from various sources.
Given a shuffled list of chapter summaries, the task is to reorder them according to the original order of summaries in BookSum.

## Data Fields

Most datasets in the benchmark share the same input-output format:

- `input`: a `string` feature. The input document.
- `output`: this feature is always None, as ZeroSCROLLS contains only test sets.
- `id`: a `string` feature. Unique per input.
- `pid`: a `string` feature, identical to `id`. Facilitates evaluating tasks with multiple references per input.
- `document_start_index`: an `int32` feature. Character index that enables easy parsing of the context document.
- `document_end_index`: an `int32` feature. Character index that enables easy parsing of the context document.
- `query_start_index`: an `int32` feature. Character index that enables easy parsing of the query, if one exists.
- `query_end_index`: an `int32` feature. Character index that enables easy parsing of the query, if one exists.
- `truncation_seperator`: a `string` feature. The string appended to a trimmed context document, mentioning that the context was trimmed.

The datasets containing multiple documents inside the `input` feature are MuSiQue, SpaceDigest, and BookSumSort. They also have the following feature:

- `inner_docs_start_indices`: a sequence of `int32` features. Character indexes that enable easy parsing of the inner documents, e.g. reviews or summaries.
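As an illustration of how the character-index fields are meant to be used, here is a minimal sketch that slices the context document and the query out of `input`. The record below is a toy stand-in that mimics the field layout; it is not real ZeroSCROLLS data, and the index values are specific to this toy string:

```python
# Toy example record mimicking the ZeroSCROLLS field layout (not real data).
example = {
    "input": "Report:\nThe committee met in 2021.\n\nQuestion:\nWhen did the committee meet?\n\nAnswer:\n",
    "document_start_index": 8,
    "document_end_index": 34,
    "query_start_index": 46,
    "query_end_index": 74,
}

# Slice the context document and the query directly out of the full prompt.
document = example["input"][example["document_start_index"]:example["document_end_index"]]
query = example["input"][example["query_start_index"]:example["query_end_index"]]

print(document)  # The committee met in 2021.
print(query)     # When did the committee meet?
```

For the multi-document tasks, `inner_docs_start_indices` can be used the same way: consecutive indices delimit the inner documents within `input`.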
## Citation
If you use the ZeroSCROLLS data, **please make sure to cite all of the original dataset papers.** [[bibtex](https://zero-scrolls-tau.s3.us-east-2.amazonaws.com/zero_scrolls_datasets.bib)]
```
@misc{shaham2023zeroscrolls,
      title={ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding},
      author={Uri Shaham and Maor Ivgi and Avia Efrat and Jonathan Berant and Omer Levy},
      year={2023},
      eprint={2305.14196},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
zero_scrolls.py
CHANGED
@@ -7,7 +7,14 @@ import os
 import datasets
 
 _ZERO_SCROLLS_CITATION = """
-
+@misc{shaham2023zeroscrolls,
+      title={ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding},
+      author={Uri Shaham and Maor Ivgi and Avia Efrat and Jonathan Berant and Omer Levy},
+      year={2023},
+      eprint={2305.14196},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
 Note that each ZeroSCROLLS task has its own citation. Please see the source to
 get the correct citation for each one.
 """
@@ -19,13 +26,26 @@ https://zero.scrolls-benchmark.com/
 """
 
 _SCROLLS_CITATION = """
-@
-
-
-
-
-
-
+@inproceedings{shaham-etal-2022-scrolls,
+    title = "{SCROLLS}: Standardized {C}ompa{R}ison Over Long Language Sequences",
+    author = "Shaham, Uri and
+      Segal, Elad and
+      Ivgi, Maor and
+      Efrat, Avia and
+      Yoran, Ori and
+      Haviv, Adi and
+      Gupta, Ankit and
+      Xiong, Wenhan and
+      Geva, Mor and
+      Berant, Jonathan and
+      Levy, Omer",
+    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
+    month = dec,
+    year = "2022",
+    address = "Abu Dhabi, United Arab Emirates",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.emnlp-main.823",
+    pages = "12007--12021",
 }
 """