---
dataset_info:
  features:
  - name: article_id
    dtype: string
  - name: abstract_text
    dtype: string
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 150590869
    num_examples: 140313
  - name: test
    num_bytes: 5848235
    num_examples: 5481
  - name: val
    num_bytes: 5748332
    num_examples: 5383
  download_size: 90308446
  dataset_size: 162187436
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
---

# arXiv Abstract

This dataset is built from arXiv scientific papers and is intended for the text expansion task. ([Download the raw data here](https://drive.google.com/file/d/1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC/view?usp=sharing)).

I processed the raw data for the article expansion task with `extract_arXiv_abstract.py`. The processed dataset contains only the article ID and abstract fields, and each abstract is between 100 and 300 tokens long. The JSON objects have the following format:

```
{
  "article_id": str,
  "abstract_text": str,
  "token_count": int
}
```
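As a rough illustration of the length filter described above, here is a minimal sketch. The whitespace tokenization and JSON-lines input format are assumptions for illustration only; the actual `extract_arXiv_abstract.py` may tokenize and read the raw data differently.

```python
import json

def filter_abstracts(lines, min_tokens=100, max_tokens=300):
    """Yield records whose abstract length is within [min_tokens, max_tokens].

    Token counting here is a simple whitespace split; the real
    preprocessing script may use a different tokenizer.
    """
    for line in lines:
        record = json.loads(line)
        abstract = record["abstract_text"]
        n_tokens = len(abstract.split())
        if min_tokens <= n_tokens <= max_tokens:
            yield {
                "article_id": record["article_id"],
                "abstract_text": abstract,
                "token_count": n_tokens,
            }

# Example with two synthetic records (IDs are made up):
raw = [
    json.dumps({"article_id": "1001.0001", "abstract_text": "word " * 150}),
    json.dumps({"article_id": "1001.0002", "abstract_text": "too short"}),
]
kept = list(filter_abstracts(raw))
# Only the first record passes the 100-300 token filter.
```

A record that survives the filter carries its measured length in `token_count`, matching the schema above.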