Commit e038b4f (verified)
Parent(s): Duplicate from SalmanFaroz/gigaword
Co-authored-by: Salman Faroz <SalmanFaroz@users.noreply.huggingface.co>
- .gitattributes +59 -0
- README.md +80 -0
- data/test-00000-of-00001.parquet +3 -0
- data/train-00000-of-00002.parquet +3 -0
- data/train-00001-of-00002.parquet +3 -0
- data/validation-00000-of-00001.parquet +3 -0
.gitattributes
ADDED
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
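Each of these patterns routes matching files through the git-LFS filter instead of storing their contents directly in git history. A minimal sketch of the matching logic, using Python's `fnmatch` as an approximation of gitattributes glob semantics (the `uses_lfs` helper and the pattern subset are illustrative, not part of the repo):

```python
from fnmatch import fnmatch

# Subset of the patterns above. A gitattributes pattern without a slash
# is matched against the basename of each path in the repository.
LFS_PATTERNS = ["*.parquet", "*.bin", "*tfevents*", "*.tar.*"]

def uses_lfs(path: str) -> bool:
    """Return True if the path's basename matches any LFS pattern."""
    basename = path.rsplit("/", 1)[-1]
    return any(fnmatch(basename, pat) for pat in LFS_PATTERNS)

print(uses_lfs("data/train-00000-of-00002.parquet"))  # True
print(uses_lfs("README.md"))                          # False
```

This explains why the parquet shards below appear in the diff as small pointer files rather than hundreds of megabytes of data.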
README.md
ADDED
@@ -0,0 +1,80 @@
---
license: apache-2.0
task_categories:
- summarization
language:
- en
size_categories:
- 1M<n<10M
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

# Gigaword (Repackaged)

Repackaged here for easier access.

## Description

Headline generation on a corpus of around 4 million article-headline pairs from Gigaword. This copy uses the 'org_data' provided by https://github.com/microsoft/unilm/, which is identical to https://github.com/harvardnlp/sent-summary but in a better format.

There are two features:
- **document**: article.
- **summary**: headline.

**Homepage:** https://github.com/harvardnlp/sent-summary

**Original source code:** tfds.summarization.Gigaword

## Dataset Statistics

- **Version:** 1.2.0 (default)
- **Download size:** 551.61 MiB
- **Dataset size:** 1.02 GiB

### Splits

| Split | Examples |
| :--- | ---: |
| test | 1,951 |
| train | 3,803,957 |
| validation | 189,651 |

## Feature Structure

```python
FeaturesDict({
    'document': Text(shape=(), dtype=string),
    'summary': Text(shape=(), dtype=string),
})
```

## Citation

```bibtex
@article{graff2003english,
  title={English gigaword},
  author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
  journal={Linguistic Data Consortium, Philadelphia},
  volume={4},
  number={1},
  pages={34},
  year={2003}
}

@article{Rush_2015,
  title={A Neural Attention Model for Abstractive Sentence Summarization},
  url={http://dx.doi.org/10.18653/v1/D15-1044},
  DOI={10.18653/v1/d15-1044},
  journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  publisher={Association for Computational Linguistics},
  author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
  year={2015}
}
```

Original TFDS catalog entry: https://www.tensorflow.org/datasets/catalog/gigaword
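The split counts in the README can be cross-checked against the "around 4 million articles" figure in its description:

```python
# Split sizes as reported in the README's splits table.
splits = {"train": 3_803_957, "validation": 189_651, "test": 1_951}

total = sum(splits.values())
print(total)  # 3995559, i.e. just under 4 million

# Train examples dominate: ~95% of the corpus.
print(round(splits["train"] / total, 3))  # 0.952
```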
data/test-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ffa246329ec6c3e3fce1bdbe426ecd301f94f9b971b27d9d7f4f18288ae3c7f
size 299399
data/train-00000-of-00002.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49976de22e57dcb43d91b8061144bae614a014edf08b91b748d5533db7315e95
size 295657484
data/train-00001-of-00002.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f861eddbd3ee4202de93285e3cb663f34fb84e9675dc9fcb053b2535756fa129
size 295790707
data/validation-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4849ae37abcb7202215f8f1b7f53b1b80c64babab89e5e3c5c8f84c752b31d36
size 29388959
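What git actually stores for each parquet shard is the LFS pointer shown above: three `key value` lines giving the spec version, the SHA-256 of the real content, and its size in bytes. A minimal sketch of parsing one (the `parse_lfs_pointer` helper is illustrative, not part of the repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is always a byte count
    return fields

# The test split's pointer, verbatim from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1ffa246329ec6c3e3fce1bdbe426ecd301f94f9b971b27d9d7f4f18288ae3c7f
size 299399
"""

info = parse_lfs_pointer(pointer)
print(info["size"])                       # 299399
print(info["oid"].startswith("sha256:"))  # True
```

On checkout, the LFS filter resolves the `oid` against the LFS object store and replaces the pointer with the real file; the `size` lets clients report download progress before fetching.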