README.md (commit: Upload README.md with huggingface_hub)

Removed: the auto-generated frontmatter, which recorded a single train split (num_bytes: 668642138, num_examples: 20000), download_size: 320970591, dataset_size: 668642138, and a default config reading data/train-*.

Added: the dataset card below.

---
license: mit
task_categories:
- text-generation
- summarization
language:
- en
size_categories:
- 10K<n<100K
---

# ArXiv Summarization Dataset - 20K Preprocessed

A preprocessed dataset of 20,000 ArXiv papers, each pairing the full article text with its abstract, designed for abstract generation and summarization tasks.

## Dataset Description

This dataset contains 20,000 ArXiv papers that have been filtered and preprocessed to ensure quality for training summarization models. Each example contains the full article text and its corresponding abstract.

## Dataset Structure

Each example has two fields:

- **article**: The full text of the ArXiv paper
- **abstract**: The abstract/summary of the paper

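A quick way to check the schema and size after loading; this is a minimal sketch, and the expected values in the comments come from the card above rather than from an independent check.

```python
from datasets import load_dataset

ds = load_dataset("yilmazzey/arxiv_summarization_20k_preprocessed", split="train")

print(ds.num_rows)              # expected: 20000
print(ds.features)              # expected: string-valued 'article' and 'abstract' columns
print(ds[0]["abstract"][:200])  # peek at the first abstract
```
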
## Dataset Statistics

- **Total Papers**: 20,000
- **Article Word Count**:
  - Mean: 5,875.84 words
  - Median: 5,217 words
  - Range: 2,000 - 14,998 words
- **Abstract Word Count**:
  - Mean: 179.86 words
  - Median: 166 words
  - Range: 50 - 500 words
- **Length Ratio** (article/abstract):
  - Mean: 36.00
  - Median: 32.43
  - Range: 5.01 - 99.98

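The card does not say how words were counted. The sketch below re-derives the same kind of statistics under the assumption of simple whitespace tokenization, so the exact numbers may differ slightly from those listed above.

```python
import statistics
from datasets import load_dataset

ds = load_dataset("yilmazzey/arxiv_summarization_20k_preprocessed", split="train")

# Assumption: "word count" means whitespace-separated tokens.
article_words = [len(ex["article"].split()) for ex in ds]
abstract_words = [len(ex["abstract"].split()) for ex in ds]
ratios = [a / b for a, b in zip(article_words, abstract_words)]

for name, values in [("article", article_words), ("abstract", abstract_words), ("ratio", ratios)]:
    print(f"{name}: mean={statistics.mean(values):.2f} "
          f"median={statistics.median(values)} min={min(values)} max={max(values)}")
```
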
## Filtering Criteria

The dataset was filtered using the following criteria (a sketch of an equivalent filter appears after the list):

- Minimum article words: 2,000
- Maximum article words: 15,000
- Minimum abstract words: 50
- Maximum abstract words: 500
- Minimum length ratio (article/abstract): 5
- Maximum length ratio (article/abstract): 100

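The preprocessing script itself is not part of this card; a minimal sketch of an equivalent filter, assuming whitespace word counts and a hypothetical `raw` source dataset, would look like this:

```python
MIN_ARTICLE, MAX_ARTICLE = 2_000, 15_000
MIN_ABSTRACT, MAX_ABSTRACT = 50, 500
MIN_RATIO, MAX_RATIO = 5, 100

def keep(example: dict) -> bool:
    """Return True if an article/abstract pair meets the length criteria above."""
    n_article = len(example["article"].split())
    n_abstract = len(example["abstract"].split())
    if not (MIN_ARTICLE <= n_article <= MAX_ARTICLE):
        return False
    if not (MIN_ABSTRACT <= n_abstract <= MAX_ABSTRACT):
        return False
    return MIN_RATIO <= n_article / n_abstract <= MAX_RATIO

# `raw` is a hypothetical datasets.Dataset with 'article' and 'abstract' columns.
# filtered = raw.filter(keep)
```
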
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("yilmazzey/arxiv_summarization_20k_preprocessed")

# Access the data
print(dataset['train'][0])
# Output: {'article': '...', 'abstract': '...'}
```

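The repository exposes only a `train` split. If you need a held-out set, one option (not part of the original card) is the library's built-in splitter:

```python
from datasets import load_dataset

dataset = load_dataset("yilmazzey/arxiv_summarization_20k_preprocessed")

# Carve a validation set out of the single train split; the 10% size and seed are arbitrary.
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
print(len(train_ds), len(val_ds))  # roughly 18000 / 2000
```
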
## Use Cases

This dataset is suitable for:

- Training abstract generation models
- Fine-tuning language models for summarization
- Research on long-form text summarization
- Evaluating summarization metrics (ROUGE, BLEU, etc.); see the sketch below

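For the metric-evaluation use case in the last bullet, here is a minimal sketch with the `evaluate` library's ROUGE implementation; `generate_summary` is a stand-in for whatever summarizer you are testing, shown here as a naive lead baseline:

```python
import evaluate
from datasets import load_dataset

rouge = evaluate.load("rouge")
ds = load_dataset("yilmazzey/arxiv_summarization_20k_preprocessed", split="train").select(range(100))

def generate_summary(article: str) -> str:
    # Placeholder: replace with your model's inference call.
    return " ".join(article.split()[:150])  # lead-150-words baseline

predictions = [generate_summary(ex["article"]) for ex in ds]
references = [ex["abstract"] for ex in ds]
print(rouge.compute(predictions=predictions, references=references))
```
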
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{arxiv_summarization_20k_preprocessed,
  title={ArXiv Summarization Dataset - 20K Preprocessed},
  author={Yilmaz, Zeynep},
  year={2024},
  url={https://huggingface.co/datasets/yilmazzey/arxiv_summarization_20k_preprocessed}
}
```