---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: dataset
  data_files: "dataset.json"
- config_name: forget01
  data_files: "forget01.json"
- config_name: forget05
  data_files: "forget05.json"
- config_name: forget10
  data_files: "forget10.json"
- config_name: retain99
  data_files: "retain99.json"
- config_name: retain95
  data_files: "retain95.json"
- config_name: retain90
  data_files: "retain90.json"
- config_name: full
  data_files: "full.json"
- config_name: world_facts
  data_files: "world_facts.json"
- config_name: real_authors
  data_files: "real_authors.json"
- config_name: world_facts_perturbed
  data_files: "world_facts_perturbed.json"
- config_name: real_authors_perturbed
  data_files: "real_authors_perturbed.json"
---
# CopyrightQA
This dataset is derived from the [NarrativeQA dataset](https://github.com/deepmind/narrativeqa), created by Kocisky et al. (2018). NarrativeQA is a dataset for evaluating reading comprehension and narrative understanding.

This dataset is an extraction of the question-answer pairs from the original NarrativeQA dataset. Its original purpose is to evaluate the forgetting ability of LLMs using [TOFU](https://locuslab.github.io/tofu/), created by Maini et al. (2024). TOFU is a benchmark for evaluating the unlearning performance of LLMs on realistic tasks.
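As a minimal sketch of how the paired `forgetXX`/`retainYY` configs listed above relate to each other (assuming each pair is a disjoint percentage partition of the full set; the `question`/`answer` field names are illustrative and not confirmed from the data files):

```python
# Illustrative sketch: partition a list of QA pairs into a "forget" split and
# its complementary "retain" split, mirroring the forget01/retain99,
# forget05/retain95, and forget10/retain90 config pairs above.
# The "question"/"answer" keys are an assumption, not the verified schema.

def forget_retain_split(qa_pairs, forget_pct):
    """Return (forget, retain), where forget holds the first forget_pct percent."""
    cut = len(qa_pairs) * forget_pct // 100
    return qa_pairs[:cut], qa_pairs[cut:]

full = [{"question": f"q{i}", "answer": f"a{i}"} for i in range(100)]
forget01, retain99 = forget_retain_split(full, 1)
forget10, retain90 = forget_retain_split(full, 10)
```

In a TOFU-style setup, an unlearning method is applied to the forget split while performance is measured on the complementary retain split.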

## Citation
If you use this dataset, please also cite the original NarrativeQA dataset:

```bibtex
@article{narrativeqa,
  author  = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
             Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
             Edward Grefenstette},
  title   = {The {NarrativeQA} Reading Comprehension Challenge},
  journal = {Transactions of the Association for Computational Linguistics},
  url     = {https://TBD},
  volume  = {TBD},
  year    = {2018},
  pages   = {TBD},
}
```