---
dataset_info:
  features:
  - name: article_id
    dtype: string
  - name: abstract_text
    dtype: string
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 150590869
    num_examples: 140313
  - name: test
    num_bytes: 5848235
    num_examples: 5481
  - name: val
    num_bytes: 5748332
    num_examples: 5383
  download_size: 90308446
  dataset_size: 162187436
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
---
# arXiv Abstract
This dataset is derived from arXiv scientific papers and is intended for the text expansion task. (Download the raw data here.)
I processed the raw data for the article expansion task with `extract_arXiv_abstract.py`. The processed dataset contains only the article ID, abstract text, and token count fields, and each abstract is 100-300 tokens long. Each JSON object has the following format:
```python
{
    'article_id': str,
    'abstract_text': List[str],
    'token_count': int
}
```
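As a minimal sketch of the length filter described above, the following builds a record in this schema and keeps it only when its token count falls in the 100-300 range. This is an illustration, not the actual `extract_arXiv_abstract.py`: the function name `make_record` is hypothetical, and whitespace splitting is assumed as the tokenizer, which may differ from the script's real tokenization.

```python
from typing import List, Optional, TypedDict


class AbstractRecord(TypedDict):
    """One processed example, matching the dataset's JSON schema."""
    article_id: str
    abstract_text: List[str]
    token_count: int


MIN_TOKENS, MAX_TOKENS = 100, 300


def make_record(article_id: str, sentences: List[str]) -> Optional[AbstractRecord]:
    """Return a record if the abstract passes the length filter, else None.

    Assumption: token_count is the whitespace-token count summed over all
    sentences; the real extraction script may count tokens differently.
    """
    count = sum(len(s.split()) for s in sentences)
    if not (MIN_TOKENS <= count <= MAX_TOKENS):
        return None  # abstract too short or too long; drop it
    return AbstractRecord(
        article_id=article_id,
        abstract_text=sentences,
        token_count=count,
    )
```

For example, an abstract of three 50-word sentences (150 tokens total) is kept, while a 2-word abstract is filtered out.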