---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: submitter
    dtype: string
  - name: authors
    dtype: string
  - name: title
    dtype: string
  - name: comments
    dtype: string
  - name: journal-ref
    dtype: string
  - name: doi
    dtype: string
  - name: report-no
    dtype: string
  - name: categories
    dtype: string
  - name: license
    dtype: string
  - name: orig_abstract
    dtype: string
  - name: versions
    list:
    - name: created
      dtype: string
    - name: version
      dtype: string
  - name: update_date
    dtype: string
  - name: authors_parsed
    sequence:
      sequence: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 295844767.0072125
    num_examples: 137136
  - name: test
    num_bytes: 63396848.15103951
    num_examples: 29387
  - name: validation
    num_bytes: 63394690.841747954
    num_examples: 29386
  download_size: 236976269
  dataset_size: 422636305.99999994
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---


# Dataset Card for hep-ph_gr-qc_primary

## Dataset Description

- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:** [hepthLlama](https://github.com/Paul-Richmond/hepthLlama)
- **Paper:** [tbd](tbd)
- **Point of Contact:** [Paul Richmond](mailto:p.richmond@qmul.ac.uk)

### Dataset Summary
This dataset contains the metadata of arXiv submissions whose primary category is 'hep-ph' or 'gr-qc'.

## Dataset Structure

### Languages

The text in the `abstract` field of the dataset is in English; however, some abstracts may
also contain a translation into another language.

## Dataset Creation

### Curation Rationale
The starting point was to load v193 of the Kaggle arXiv Dataset, which includes arXiv submissions up to 23 August 2024.
The arXiv dataset contains the following data fields:
- `id`: ArXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the ArXiv system

To arrive at the hep-ph_gr-qc_primary dataset, the full arXiv data
was filtered to retain only records whose `categories` field included 'hep-ph' or 'gr-qc'.
This yielded papers that were either primarily classified as 'hep-ph' or 'gr-qc', or merely cross-listed under those categories.
For this dataset, the decision was made to keep only papers primarily classified as either 'hep-ph' or 'gr-qc'.
This meant taking only those records where the first characters in `categories` were either 'hep-ph' or 'gr-qc'
(see [here](https://info.arxiv.org/help/arxiv_identifier_for_services.html#indications-of-classification) for more details).
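The primary-category filter described above can be sketched in plain Python; the same predicate could be passed to the `datasets` library's `filter` method. The sample records below are illustrative, not taken from the dataset:

```python
def is_primary(record):
    """Keep records whose primary (first-listed) category is hep-ph or gr-qc.

    `categories` is a space-separated string; the first token is the
    primary classification, so checking the first characters of the
    field amounts to checking that token.
    """
    return record["categories"].startswith(("hep-ph", "gr-qc"))

# Illustrative records (not real dataset entries):
sample = [
    {"id": "a", "categories": "hep-ph hep-th"},  # primary hep-ph -> kept
    {"id": "b", "categories": "hep-th hep-ph"},  # cross-listed only -> dropped
    {"id": "c", "categories": "gr-qc"},          # primary gr-qc -> kept
]
kept = [r for r in sample if is_primary(r)]
```

With `datasets`, this would look like `ds.filter(is_primary)`.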

We also dropped entries whose `abstract` or `comments` contained the word 'Withdrawn' or 'withdrawn', and we removed the five records that appear in the repo `LLMsForHepth/arxiv_hepth_first_overfit`.

In addition, we cleaned the data in `abstract` by first replacing all occurrences of '\n' with a single space and then removing any leading and trailing whitespace.
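A minimal sketch of this cleaning step:

```python
def clean_abstract(text):
    """Replace newlines with spaces, then strip leading/trailing whitespace."""
    return text.replace("\n", " ").strip()

raw = "  We study\nblack holes.\n"
cleaned = clean_abstract(raw)  # -> "We study black holes."
```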

### Data splits

The dataset is split into training, test and validation sets with split percentages 70%, 15% and 15%. This was done by applying `train_test_split` twice (both times with `seed=42`).
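The two-stage split first holds out 30% of the data, then divides the held-out portion evenly between test and validation. The exact rounding behaviour assumed below is illustrative, but it reproduces the published split sizes:

```python
# Total examples across all splits: 137,136 + 29,387 + 29,386
total = 137_136 + 29_387 + 29_386

# First application of train_test_split: 70% train, 30% held out.
train_n = round(total * 0.70)
held_n = total - train_n

# Second application: divide the held-out 30% in half.
# (Assumption: when held_n is odd, the test split receives the extra example.)
test_n = held_n // 2 + held_n % 2
val_n = held_n // 2
```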
The final split sizes are as follows:

| Train | Test | Validation |
|:---:|:---:|:---:|
|137,136 | 29,387| 29,386   |