---
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: orig_abstract
dtype: string
- name: versions
list:
- name: created
dtype: string
- name: version
dtype: string
- name: update_date
dtype: string
- name: authors_parsed
sequence:
sequence: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 295844767.0072125
num_examples: 137136
- name: test
num_bytes: 63396848.15103951
num_examples: 29387
- name: validation
num_bytes: 63394690.841747954
num_examples: 29386
download_size: 236976269
dataset_size: 422636305.99999994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for hep-ph_gr-qc_primary
## Dataset Description
- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:** [hepthLlama](https://github.com/Paul-Richmond/hepthLlama)
- **Paper:** [tbd](tbd)
- **Point of Contact:** [Paul Richmond](mailto:p.richmond@qmul.ac.uk)
### Dataset Summary
This dataset contains metadata included in arXiv submissions.
## Dataset Structure
### Languages
The text in the `abstract` field of the dataset is in English; however, some abstracts
may also contain a translation into another language.
## Dataset Creation
### Curation Rationale
The starting point was to load v193 of the Kaggle arXiv Dataset, which includes arXiv submissions up to 23 August 2024.
The arXiv dataset contains the following data fields:
- `id`: arXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the arXiv system
To arrive at the hep-ph_gr-qc_primary dataset, the full arXiv data
was filtered to retain only records whose `categories` included 'hep-ph' or 'gr-qc'.
This kept papers that were either primarily classified as 'hep-ph' or 'gr-qc' or cross-listed under those categories.
For this dataset, the decision was made to focus only on papers primarily classified as either 'hep-ph' or 'gr-qc'.
This meant taking only those abstracts where the first characters in `categories` were either 'hep-ph' or 'gr-qc'
(see [here](https://info.arxiv.org/help/arxiv_identifier_for_services.html#indications-of-classification) for more details).
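The primary-category filter described above can be sketched as follows. This is an illustrative reconstruction, not the exact pipeline code; the `is_primary` helper and the toy records are hypothetical.

```python
def is_primary(categories: str) -> bool:
    """Keep a record only if its `categories` string starts with
    'hep-ph' or 'gr-qc', i.e. one of them is the primary category.
    (arXiv lists the primary category first.)"""
    return categories.startswith(("hep-ph", "gr-qc"))

# Toy records for illustration only.
records = [
    {"id": "a", "categories": "hep-ph gr-qc"},   # primary hep-ph -> kept
    {"id": "b", "categories": "hep-th hep-ph"},  # cross-listed only -> dropped
    {"id": "c", "categories": "gr-qc"},          # primary gr-qc -> kept
]
kept = [r["id"] for r in records if is_primary(r["categories"])]
print(kept)  # -> ['a', 'c']
```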
We also dropped entries whose `abstract` or `comments` contained the word 'withdrawn' (in either case), and we removed the five records that appear in the repo `LLMsForHepth/arxiv_hepth_first_overfit`.
In addition, we cleaned the data in `abstract` by first replacing all occurrences of '\n' with a space and then removing any leading and trailing whitespace.
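The abstract-cleaning step amounts to the following minimal sketch (the `clean_abstract` name and the sample string are hypothetical, not taken from the pipeline):

```python
def clean_abstract(abstract: str) -> str:
    # Replace every newline with a space, then strip leading/trailing whitespace.
    return abstract.replace("\n", " ").strip()

raw = "  We study\nneutrino masses\nin a GUT framework.  "
print(clean_abstract(raw))  # -> "We study neutrino masses in a GUT framework."
```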
### Data Splits
The dataset is split into training, test and validation sets with proportions 70%, 15% and 15%. This was done by applying `train_test_split` twice (both times with `seed=42`).
The final split sizes are as follows:
| Train | Test | Validation |
|:---:|:---:|:---:|
|137,136 | 29,387| 29,386 |
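The two-stage split presumably uses `datasets.Dataset.train_test_split`; the size arithmetic can be sketched in plain Python. The 30% holdout fraction and the rounding convention below are assumptions chosen to reproduce the table, not the pipeline's actual code:

```python
total = 137136 + 29387 + 29386   # 195,909 examples in the filtered dataset

# Stage 1: hold out 30% of the data for test + validation.
holdout = round(total * 0.30)    # 58,773
train = total - holdout          # 137,136

# Stage 2: split the holdout in half (one plausible rounding convention).
validation = holdout // 2        # 29,386
test = holdout - validation      # 29,387

print(train, test, validation)   # -> 137136 29387 29386
```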