parquet-converter committed · Commit f3b7166 · verified · 0 Parent(s)

Duplicate from deepmind/pg19

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (6):
  1. .gitattributes +27 -0
  2. README.md +211 -0
  3. data/test_files.txt +100 -0
  4. data/train_files.txt +0 -0
  5. data/validation_files.txt +50 -0
  6. pg19.py +146 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,211 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license:
+ - apache-2.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-generation
+ task_ids:
+ - language-modeling
+ paperswithcode_id: pg-19
+ pretty_name: PG-19
+ dataset_info:
+   features:
+   - name: short_book_title
+     dtype: string
+   - name: publication_date
+     dtype: int32
+   - name: url
+     dtype: string
+   - name: text
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 11453688452
+     num_examples: 28602
+   - name: validation
+     num_bytes: 17402295
+     num_examples: 50
+   - name: test
+     num_bytes: 40482852
+     num_examples: 100
+   download_size: 11740397875
+   dataset_size: 11511573599
+ ---
+
+ # Dataset Card for "pg19"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://github.com/deepmind/pg19](https://github.com/deepmind/pg19)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 11.74 GB
+ - **Size of the generated dataset:** 11.51 GB
+ - **Total amount of disk used:** 23.25 GB
+
+ ### Dataset Summary
+
+ This repository contains the PG-19 language modeling benchmark.
+ It includes a set of books extracted from the Project Gutenberg books library that were published before 1919.
+ It also contains metadata of book titles and publication dates.
+
+ PG-19 is over double the size of the Billion Word benchmark and contains documents that are, on average, 20x longer than those in the WikiText long-range language modelling benchmark.
+ Books are partitioned into train, validation, and test sets. Book metadata is stored in metadata.csv, which contains (book_id, short_book_title, publication_date).
+
+ Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. map rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing applied to the text is the removal of boilerplate license text and the mapping of offensive discriminatory words, as specified by Ofcom, to placeholder tokens. Users are free to model the data at the character level, the subword level, or via any mechanism that can model an arbitrary string of text.
+ To compare models, we propose to continue measuring word-level perplexity by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens given in the dataset statistics table.
+ One could use this dataset to benchmark long-range language models, or to pre-train for other natural language processing tasks that require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for a production dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
+
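+ A minimal sketch of that metric, assuming only per-token log-probabilities from
+ some model and a reference word count for the split (`token_log_probs` and
+ `num_words` are illustrative names, not part of any released tooling):
+
+ ```python
+ import math
+
+ def word_level_perplexity(token_log_probs, num_words):
+     # Sum log-probabilities over the model's own tokens (any vocabulary),
+     # then normalise by the number of *words*, so that scores stay
+     # comparable across subword and character-level schemes.
+     total_log_prob = sum(token_log_probs)
+     return math.exp(-total_log_prob / num_words)
+
+ # Example: a model assigning log-prob -2.0 to each of 3 subword tokens of a
+ # 2-word string scores exp(6.0 / 2) ~= 20.1 word-level perplexity.
+ print(word_level_perplexity([-2.0, -2.0, -2.0], num_words=2))
+ ```
+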
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### default
+
+ - **Size of downloaded dataset files:** 11.74 GB
+ - **Size of the generated dataset:** 11.51 GB
+ - **Total amount of disk used:** 23.25 GB
+
+ An example of 'train' looks as follows (cropped for length):
+ ```
+ {
+     "publication_date": 1907,
+     "short_book_title": "La Fiammetta by Giovanni Boccaccio",
+     "text": "\"\\n\\n\\n\\nProduced by Ted Garvin, Dave Morgan and PG Distributed Proofreaders\\n\\n\\n\\n\\nLA FIAMMETTA\\n\\nBY\\n\\nGIOVANNI BOCCACCIO\\n...",
+     "url": "http://www.gutenberg.org/ebooks/10006"
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### default
+ - `short_book_title`: a `string` feature.
+ - `publication_date`: an `int32` feature.
+ - `url`: a `string` feature.
+ - `text`: a `string` feature.
+
+ ### Data Splits
+
+ | name  |train|validation|test|
+ |-------|----:|---------:|---:|
+ |default|28602|        50| 100|
+
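+ A minimal loading sketch for these splits, assuming the upstream
+ `deepmind/pg19` identifier and a `datasets` release with streaming support
+ (streaming avoids materialising the ~11.74 GB download up front):
+
+ ```python
+ from itertools import islice
+
+ from datasets import load_dataset
+
+ # Stream the 50-book validation split instead of downloading everything.
+ ds = load_dataset("deepmind/pg19", split="validation", streaming=True)
+
+ for example in islice(ds, 1):
+     # Fields follow the schema above: short_book_title, publication_date, url, text.
+     print(example["short_book_title"], example["publication_date"], example["url"])
+     print(example["text"][:200])
+ ```
+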
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ The dataset is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
+
+ ### Citation Information
+
+ ```
+ @article{raecompressive2019,
+   author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and
+             Hillier, Chloe and Lillicrap, Timothy P},
+   title = {Compressive Transformers for Long-Range Sequence Modelling},
+   journal = {arXiv preprint},
+   url = {https://arxiv.org/abs/1911.05507},
+   year = {2019},
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
data/test_files.txt ADDED
@@ -0,0 +1,100 @@
+ test/10146.txt
+ test/10321.txt
+ test/10356.txt
+ test/10762.txt
+ test/12204.txt
+ test/15562.txt
+ test/22424.txt
+ test/24553.txt
+ test/2544.txt
+ test/25646.txt
+ test/25773.txt
+ test/25830.txt
+ test/26183.txt
+ test/26239.txt
+ test/26493.txt
+ test/26618.txt
+ test/27454.txt
+ test/28444.txt
+ test/28988.txt
+ test/29594.txt
+ test/29973.txt
+ test/30312.txt
+ test/30752.txt
+ test/30754.txt
+ test/30909.txt
+ test/30981.txt
+ test/31065.txt
+ test/3129.txt
+ test/31974.txt
+ test/3247.txt
+ test/32761.txt
+ test/3340.txt
+ test/33426.txt
+ test/33756.txt
+ test/34467.txt
+ test/35205.txt
+ test/35246.txt
+ test/3608.txt
+ test/36256.txt
+ test/37006.txt
+ test/37328.txt
+ test/37403.txt
+ test/37443.txt
+ test/3754.txt
+ test/37702.txt
+ test/38106.txt
+ test/3890.txt
+ test/38929.txt
+ test/38955.txt
+ test/4047.txt
+ test/40579.txt
+ test/40700.txt
+ test/4128.txt
+ test/41603.txt
+ test/41607.txt
+ test/42081.txt
+ test/42655.txt
+ test/43536.txt
+ test/43845.txt
+ test/44099.txt
+ test/44557.txt
+ test/45313.txt
+ test/45881.txt
+ test/45888.txt
+ test/46915.txt
+ test/47068.txt
+ test/47558.txt
+ test/47581.txt
+ test/47676.txt
+ test/48693.txt
+ test/49078.txt
+ test/49529.txt
+ test/49596.txt
+ test/50287.txt
+ test/51410.txt
+ test/53345.txt
+ test/5396.txt
+ test/54537.txt
+ test/54624.txt
+ test/55339.txt
+ test/55871.txt
+ test/56410.txt
+ test/5734.txt
+ test/5770.txt
+ test/57791.txt
+ test/58473.txt
+ test/58553.txt
+ test/58598.txt
+ test/5956.txt
+ test/5962.txt
+ test/6412.txt
+ test/6941.txt
+ test/7412.txt
+ test/7987.txt
+ test/8197.txt
+ test/8559.txt
+ test/860.txt
+ test/8788.txt
+ test/9315.txt
+ test/9931.txt
data/train_files.txt ADDED
The diff for this file is too large to render. See raw diff
 
data/validation_files.txt ADDED
@@ -0,0 +1,50 @@
+ validation/1022.txt
+ validation/11155.txt
+ validation/13089.txt
+ validation/16959.txt
+ validation/1925.txt
+ validation/2383.txt
+ validation/23956.txt
+ validation/24360.txt
+ validation/25066.txt
+ validation/27688.txt
+ validation/28213.txt
+ validation/28776.txt
+ validation/29981.txt
+ validation/32629.txt
+ validation/34016.txt
+ validation/34056.txt
+ validation/34100.txt
+ validation/356.txt
+ validation/35816.txt
+ validation/36402.txt
+ validation/37833.txt
+ validation/38214.txt
+ validation/38403.txt
+ validation/4024.txt
+ validation/41074.txt
+ validation/42067.txt
+ validation/42142.txt
+ validation/42306.txt
+ validation/43423.txt
+ validation/44896.txt
+ validation/44912.txt
+ validation/4533.txt
+ validation/48089.txt
+ validation/48461.txt
+ validation/48677.txt
+ validation/49091.txt
+ validation/50355.txt
+ validation/51859.txt
+ validation/5195.txt
+ validation/5321.txt
+ validation/53682.txt
+ validation/54098.txt
+ validation/555.txt
+ validation/55658.txt
+ validation/56719.txt
+ validation/57843.txt
+ validation/58093.txt
+ validation/6404.txt
+ validation/7510.txt
+ validation/8545.txt
pg19.py ADDED
@@ -0,0 +1,146 @@
+ """PG-19 language modeling benchmark - a set of books extracted from the Project Gutenberg books library."""
+
+
+ import csv
+ import os
+
+ import datasets
+
+
+ # TODO(pg19): BibTeX citation
+ _CITATION = """\
+ @article{raecompressive2019,
+   author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and
+             Hillier, Chloe and Lillicrap, Timothy P},
+   title = {Compressive Transformers for Long-Range Sequence Modelling},
+   journal = {arXiv preprint},
+   url = {https://arxiv.org/abs/1911.05507},
+   year = {2019},
+ }
+
+ """
+
+ # TODO(pg19):
+ _DESCRIPTION = """\
+ This repository contains the PG-19 language modeling benchmark.
+ It includes a set of books extracted from the Project Gutenberg books library that were published before 1919.
+ It also contains metadata of book titles and publication dates.
+
+ PG-19 is over double the size of the Billion Word benchmark and contains documents that are, on average, 20x longer than those in the WikiText long-range language modelling benchmark.
+ Books are partitioned into train, validation, and test sets. Book metadata is stored in metadata.csv, which contains (book_id, short_book_title, publication_date).
+
+ Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. map rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing applied to the text is the removal of boilerplate license text and the mapping of offensive discriminatory words, as specified by Ofcom, to placeholder tokens. Users are free to model the data at the character level, the subword level, or via any mechanism that can model an arbitrary string of text.
+ To compare models, we propose to continue measuring word-level perplexity by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens given in the dataset statistics table.
+ One could use this dataset to benchmark long-range language models, or to pre-train for other natural language processing tasks that require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for a production dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
+ """
+
+
+ _SPLIT_FILES_PATH = "data/{split}_files.txt"
+ _ASSET_ROOT_URL = "https://storage.googleapis.com/deepmind-gutenberg/"
+ _METADATA_URL = _ASSET_ROOT_URL + "metadata.csv"
+
+
+ def flat_map(fn, arr):
+     return [el for sub_arr in map(fn, arr) for el in sub_arr]
+
+
+ class Pg19(datasets.GeneratorBasedBuilder):
+     """PG-19 dataset - books as plain text extracted from the Project Gutenberg library."""
+
+     # TODO(pg19): Set up version.
+     VERSION = datasets.Version("0.1.0")
+
+     def _info(self):
+         # TODO(pg19): Specifies the datasets.DatasetInfo object
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # datasets.features.FeatureConnectors
+             features=datasets.Features(
+                 {
+                     "short_book_title": datasets.Value("string"),
+                     "publication_date": datasets.Value("int32"),
+                     "url": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     # These are the features of your dataset like images, labels ...
+                 }
+             ),
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage="https://github.com/deepmind/pg19",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         splits = ["train", "validation", "test"]
+         # The data/{split}_files.txt manifests in this repository list one
+         # Project Gutenberg text file per line.
+         files = dl_manager.download({split: _SPLIT_FILES_PATH.format(split=split) for split in splits})
+
+         for split, names_file in list(files.items()):
+             with open(names_file, encoding="utf-8") as f:
+                 split_files = f.read().splitlines()
+             split_files = sorted(split_files)
+             # Map each book id (the file stem) to its download URL on GCS.
+             split_files = {
+                 os.path.splitext(os.path.basename(file))[0]: _ASSET_ROOT_URL + file
+                 for file in split_files
+             }
+             files[split] = split_files
+
+         metadata = dl_manager.download(_METADATA_URL)
+         downloaded_files = dl_manager.download(files)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "ids": list(downloaded_files["train"]),
+                     "metadata": metadata,
+                     "files": downloaded_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "ids": list(downloaded_files["validation"]),
+                     "metadata": metadata,
+                     "files": downloaded_files["validation"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "ids": list(downloaded_files["test"]),
+                     "metadata": metadata,
+                     "files": downloaded_files["test"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, ids, metadata, files):
+         """Yields examples."""
+         # TODO(pg19): Yields (key, example) tuples from the dataset
+
+         # metadata.csv has no header row; columns are (id, title, date, url).
+         with open(metadata, encoding="utf-8") as f:
+             reader = csv.DictReader(f, fieldnames=["_id", "short_book_title", "publication_date", "url"])
+             id2metadata = {row["_id"]: row for row in reader}
+
+         for _id in ids:
+             data = id2metadata[_id]
+             file = files[_id]
+
+             with open(file, encoding="utf-8") as f:
+                 text = f.read()
+
+             short_book_title = data["short_book_title"]
+             publication_date = int(data["publication_date"])
+             url = data["url"]
+
+             yield _id, {
+                 "short_book_title": short_book_title,
+                 "publication_date": publication_date,
+                 "url": url,
+                 "text": text,
+             }
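+
+
+ if __name__ == "__main__":
+     # Hypothetical smoke test, not part of the upstream script: build the
+     # smallest split through this loader, assuming a `datasets` release that
+     # still supports script-based datasets.
+     from datasets import load_dataset
+
+     ds = load_dataset(__file__, split="validation")
+     print(ds[0]["short_book_title"], ds[0]["publication_date"])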