leilei2026 (parquet-converter) committed
Commit f7b80d1 · verified · 0 parent(s)

Duplicate from reciTAL/mlsum

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (3):
  1. .gitattributes +27 -0
  2. README.md +441 -0
  3. mlsum.py +98 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,441 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ language:
+ - de
+ - es
+ - fr
+ - ru
+ - tr
+ license:
+ - other
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 100K<n<1M
+ - 10K<n<100K
+ source_datasets:
+ - extended|cnn_dailymail
+ - original
+ task_categories:
+ - summarization
+ - translation
+ - text-classification
+ task_ids:
+ - news-articles-summarization
+ - multi-class-classification
+ - multi-label-classification
+ - topic-classification
+ paperswithcode_id: mlsum
+ pretty_name: MLSUM
+ dataset_info:
+ - config_name: de
+   features:
+   - name: text
+     dtype: string
+   - name: summary
+     dtype: string
+   - name: topic
+     dtype: string
+   - name: url
+     dtype: string
+   - name: title
+     dtype: string
+   - name: date
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 846959840
+     num_examples: 220887
+   - name: validation
+     num_bytes: 47119541
+     num_examples: 11394
+   - name: test
+     num_bytes: 46847612
+     num_examples: 10701
+   download_size: 1005814154
+   dataset_size: 940926993
+ - config_name: es
+   features:
+   - name: text
+     dtype: string
+   - name: summary
+     dtype: string
+   - name: topic
+     dtype: string
+   - name: url
+     dtype: string
+   - name: title
+     dtype: string
+   - name: date
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1214558302
+     num_examples: 266367
+   - name: validation
+     num_bytes: 50643400
+     num_examples: 10358
+   - name: test
+     num_bytes: 71263665
+     num_examples: 13920
+   download_size: 1456211154
+   dataset_size: 1336465367
+ - config_name: fr
+   features:
+   - name: text
+     dtype: string
+   - name: summary
+     dtype: string
+   - name: topic
+     dtype: string
+   - name: url
+     dtype: string
+   - name: title
+     dtype: string
+   - name: date
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1471965014
+     num_examples: 392902
+   - name: validation
+     num_bytes: 70413212
+     num_examples: 16059
+   - name: test
+     num_bytes: 69660288
+     num_examples: 15828
+   download_size: 1849565564
+   dataset_size: 1612038514
+ - config_name: ru
+   features:
+   - name: text
+     dtype: string
+   - name: summary
+     dtype: string
+   - name: topic
+     dtype: string
+   - name: url
+     dtype: string
+   - name: title
+     dtype: string
+   - name: date
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 257389497
+     num_examples: 25556
+   - name: validation
+     num_bytes: 9128497
+     num_examples: 750
+   - name: test
+     num_bytes: 9656398
+     num_examples: 757
+   download_size: 766226107
+   dataset_size: 276174392
+ - config_name: tu
+   features:
+   - name: text
+     dtype: string
+   - name: summary
+     dtype: string
+   - name: topic
+     dtype: string
+   - name: url
+     dtype: string
+   - name: title
+     dtype: string
+   - name: date
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 641622783
+     num_examples: 249277
+   - name: validation
+     num_bytes: 25530661
+     num_examples: 11565
+   - name: test
+     num_bytes: 27830212
+     num_examples: 12775
+   download_size: 942308960
+   dataset_size: 694983656
+ config_names:
+ - de
+ - es
+ - fr
+ - ru
+ - tu
+ ---
+ 
+ # Dataset Card for MLSUM
+ 
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** []()
+ - **Repository:** https://github.com/recitalAI/MLSUM
+ - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.647/
+ - **Point of Contact:** [email](thomas@recital.ai)
+ - **Size of downloaded dataset files:** 1.83 GB
+ - **Size of the generated dataset:** 4.86 GB
+ - **Total amount of disk used:** 6.69 GB
+ 
+ ### Dataset Summary
+ 
+ We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
+ Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
+ Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
+ We report cross-lingual comparative analyses based on state-of-the-art systems.
+ These highlight existing biases which motivate the use of a multi-lingual dataset.
+ 
+ ### Supported Tasks and Leaderboards
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Languages
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ #### de
+ 
+ - **Size of downloaded dataset files:** 346.58 MB
+ - **Size of the generated dataset:** 940.93 MB
+ - **Total amount of disk used:** 1.29 GB
+ 
+ An example of 'validation' looks as follows.
+ ```
+ {
+     "date": "01/01/2001",
+     "summary": "A text",
+     "text": "This is a text",
+     "title": "A sample",
+     "topic": "football",
+     "url": "https://www.google.com"
+ }
+ ```
+ 
+ #### es
+ 
+ - **Size of downloaded dataset files:** 513.31 MB
+ - **Size of the generated dataset:** 1.34 GB
+ - **Total amount of disk used:** 1.85 GB
+ 
+ An example of 'validation' looks as follows.
+ ```
+ {
+     "date": "01/01/2001",
+     "summary": "A text",
+     "text": "This is a text",
+     "title": "A sample",
+     "topic": "football",
+     "url": "https://www.google.com"
+ }
+ ```
+ 
+ #### fr
+ 
+ - **Size of downloaded dataset files:** 619.99 MB
+ - **Size of the generated dataset:** 1.61 GB
+ - **Total amount of disk used:** 2.23 GB
+ 
+ An example of 'validation' looks as follows.
+ ```
+ {
+     "date": "01/01/2001",
+     "summary": "A text",
+     "text": "This is a text",
+     "title": "A sample",
+     "topic": "football",
+     "url": "https://www.google.com"
+ }
+ ```
+ 
+ #### ru
+ 
+ - **Size of downloaded dataset files:** 106.22 MB
+ - **Size of the generated dataset:** 276.17 MB
+ - **Total amount of disk used:** 382.39 MB
+ 
+ An example of 'train' looks as follows.
+ ```
+ {
+     "date": "01/01/2001",
+     "summary": "A text",
+     "text": "This is a text",
+     "title": "A sample",
+     "topic": "football",
+     "url": "https://www.google.com"
+ }
+ ```
+ 
+ #### tu
+ 
+ - **Size of downloaded dataset files:** 247.50 MB
+ - **Size of the generated dataset:** 694.99 MB
+ - **Total amount of disk used:** 942.48 MB
+ 
+ An example of 'train' looks as follows.
+ ```
+ {
+     "date": "01/01/2001",
+     "summary": "A text",
+     "text": "This is a text",
+     "title": "A sample",
+     "topic": "football",
+     "url": "https://www.google.com"
+ }
+ ```
+ 
+ ### Data Fields
+ 
+ The data fields are the same among all splits.
+ 
+ #### de
+ - `text`: a `string` feature.
+ - `summary`: a `string` feature.
+ - `topic`: a `string` feature.
+ - `url`: a `string` feature.
+ - `title`: a `string` feature.
+ - `date`: a `string` feature.
+ 
+ #### es
+ - `text`: a `string` feature.
+ - `summary`: a `string` feature.
+ - `topic`: a `string` feature.
+ - `url`: a `string` feature.
+ - `title`: a `string` feature.
+ - `date`: a `string` feature.
+ 
+ #### fr
+ - `text`: a `string` feature.
+ - `summary`: a `string` feature.
+ - `topic`: a `string` feature.
+ - `url`: a `string` feature.
+ - `title`: a `string` feature.
+ - `date`: a `string` feature.
+ 
+ #### ru
+ - `text`: a `string` feature.
+ - `summary`: a `string` feature.
+ - `topic`: a `string` feature.
+ - `url`: a `string` feature.
+ - `title`: a `string` feature.
+ - `date`: a `string` feature.
+ 
+ #### tu
+ - `text`: a `string` feature.
+ - `summary`: a `string` feature.
+ - `topic`: a `string` feature.
+ - `url`: a `string` feature.
+ - `title`: a `string` feature.
+ - `date`: a `string` feature.
+ 
+ ### Data Splits
+ 
+ |name|train |validation|test |
+ |----|-----:|---------:|----:|
+ |de |220887| 11394|10701|
+ |es |266367| 10358|13920|
+ |fr |392902| 16059|15828|
+ |ru | 25556| 750| 757|
+ |tu |249277| 11565|12775|
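
The split sizes in the table can be cross-checked quickly. A minimal sketch (the counts are copied from the table above; the helper name `total_examples` is invented for illustration):

```python
# Example counts per config, copied from the split table: (train, validation, test).
SPLITS = {
    "de": (220887, 11394, 10701),
    "es": (266367, 10358, 13920),
    "fr": (392902, 16059, 15828),
    "ru": (25556, 750, 757),
    "tu": (249277, 11565, 12775),
}

def total_examples(splits):
    """Sum train/validation/test example counts across all configs."""
    return sum(sum(counts) for counts in splits.values())

print(total_examples(SPLITS))  # 1259096
```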
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ #### Who are the source language producers?
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ #### Who are the annotators?
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Personal and Sensitive Information
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ 
+ ### Licensing Information
+ 
+ Usage of the dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders. See https://github.com/recitalAI/MLSUM#mlsum
+ 
+ ### Citation Information
+ 
+ ```
+ @article{scialom2020mlsum,
+   title={MLSUM: The Multilingual Summarization Corpus},
+   author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
+   journal={arXiv preprint arXiv:2004.14900},
+   year={2020}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ Thanks to [@RachelKer](https://github.com/RachelKer), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
mlsum.py ADDED
@@ -0,0 +1,98 @@
+ import json
+ 
+ import datasets
+ 
+ 
+ _CITATION = """\
+ @article{scialom2020mlsum,
+   title={MLSUM: The Multilingual Summarization Corpus},
+   author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
+   journal={arXiv preprint arXiv:2004.14900},
+   year={2020}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
+ Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
+ Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
+ We report cross-lingual comparative analyses based on state-of-the-art systems.
+ These highlight existing biases which motivate the use of a multi-lingual dataset.
+ """
+ 
+ _URL = "https://gitlab.lip6.fr/scialom/mlsum_data/-/raw/master/MLSUM"
+ _LANG = ["de", "es", "fr", "ru", "tu"]
+ 
+ 
+ class Mlsum(datasets.GeneratorBasedBuilder):
+ 
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name=lang,
+             version=datasets.Version("1.0.0"),
+             description="",
+         )
+         for lang in _LANG
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "summary": datasets.Value("string"),
+                     "topic": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "date": datasets.Value("string"),
+                 }
+             ),
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage="",
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # dl_manager is a datasets.download.DownloadManager used to
+         # download and extract the per-language JSONL files.
+         lang = self.config.name
+         urls_to_download = {
+             "train": f"{_URL}/{lang}_train.jsonl?inline=false",
+             "validation": f"{_URL}/{lang}_val.jsonl?inline=false",
+             "test": f"{_URL}/{lang}_test.jsonl?inline=false",
+         }
+         downloaded_files = dl_manager.download(urls_to_download)
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=split,
+                 gen_kwargs={
+                     "filepath": downloaded_files[split],
+                 },
+             )
+             for split in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]
+         ]
+ 
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             for id_, line in enumerate(f):
+                 data = json.loads(line)
+                 yield id_, {
+                     "text": data["text"],
+                     "summary": data["summary"],
+                     "topic": data["topic"],
+                     "url": data["url"],
+                     "title": data["title"],
+                     "date": data["date"],
+                 }
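
As a standalone illustration of the JSON-lines format that `_generate_examples` consumes, the following sketch parses two records the same way. The sample values are invented for illustration, not real MLSUM data:

```python
import io
import json

# Two made-up records in the JSON-lines shape the loader expects: one JSON object per line.
sample = io.StringIO(
    '{"text": "Article body", "summary": "Short summary", "topic": "sport", '
    '"url": "https://example.com/a", "title": "A", "date": "01/01/2001"}\n'
    '{"text": "Another body", "summary": "Another summary", "topic": "politics", '
    '"url": "https://example.com/b", "title": "B", "date": "02/01/2001"}\n'
)

def generate_examples(lines):
    """Mirror of _generate_examples: yield one (id, example) pair per JSON line."""
    for id_, line in enumerate(lines):
        data = json.loads(line)
        yield id_, {key: data[key] for key in ("text", "summary", "topic", "url", "title", "date")}

examples = list(generate_examples(sample))
print(len(examples))  # 2
```

In the real loader, `filepath` points at a downloaded `{lang}_{split}.jsonl` file instead of an in-memory buffer, but the parsing logic is identical.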