  - split: validation
    path: data/validation-*
---

# Dataset Card for arxiv_hep-th_primary Dataset

## Dataset Description

- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:** [hepthLlama](https://github.com/Paul-Richmond/hepthLlama)
- **Paper:** [tbd](tbd)
- **Point of Contact:** [Paul Richmond](mailto:p.richmond@qmul.ac.uk)

### Dataset Summary

This dataset contains metadata included in arXiv submissions.

## Dataset Structure

An example from the dataset looks as follows:

```
{'id': '0908.2896',
 'submitter': 'Paul Richmond',
 'authors': 'Neil Lambert, Paul Richmond',
 'title': 'M2-Branes and Background Fields',
 'comments': '19 pages',
 'journal-ref': 'JHEP 0910:084,2009',
 'doi': '10.1088/1126-6708/2009/10/084',
 'report-no': None,
 'categories': 'hep-th',
 'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/',
 'abstract': ' We discuss the coupling of multiple M2-branes to the background 3-form and\n6-form gauge fields of eleven-dimensional supergravity, including the coupling\nof the Fermions. In particular we show in detail how a natural generalization\nof the Myers flux-terms, along with the resulting curvature of the background\nmetric, leads to mass terms in the effective field theory.\n',
 'versions': [{'created': 'Thu, 20 Aug 2009 14:23:37 GMT', 'version': 'v1'}],
 'update_date': '2009-11-09',
 'authors_parsed': [['Lambert', 'Neil', ''], ['Richmond', 'Paul', '']]}
```

### Languages

The text in the `abstract` field of the dataset is in English; however, there may be examples
where the abstract also contains a translation into another language.

## Dataset Creation

### Curation Rationale

The starting point was to load v193 of the Kaggle arXiv Dataset, which includes arXiv submissions up to 23rd August 2024.
The arXiv dataset contains the following data fields:

- `id`: arXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the arXiv system

To arrive at the arxiv_hep-th_primary dataset, the full arXiv data
was filtered so that only records whose `categories` included 'hep-th' were retained.
This resulted in papers that were either primarily classified as 'hep-th' or cross-listed under it.
For this dataset, the decision was made to focus only on papers primarily classified as 'hep-th'.
This meant keeping only those records where the first characters in `categories` were 'hep-th'
(see [here](https://info.arxiv.org/help/arxiv_identifier_for_services.html#indications-of-classification) for more details).
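
A minimal sketch of this primary-category check (the function name and the exact comparison are illustrative; the card does not show the actual filtering code):

```python
def is_primary_hepth(categories: str) -> bool:
    """Return True when 'hep-th' is the primary (first-listed) category.

    arXiv stores a record's categories as a space-separated string with
    the primary category first, e.g. 'hep-th gr-qc'.
    """
    return categories.split()[0] == "hep-th"


# 'hep-th gr-qc' is kept (primary), 'gr-qc hep-th' is dropped (cross-listed).
```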

We also dropped entries whose `abstract` or `comments` contained the word 'Withdrawn' or 'withdrawn', and we removed the five records which appear in the repo `LLMsForHepth/arxiv_hepth_first_overfit`.
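
The withdrawn-entry check can be sketched as follows (the function name and dict-style field access are illustrative):

```python
def mentions_withdrawn(record: dict) -> bool:
    """Flag entries whose `abstract` or `comments` contain the literal
    word 'Withdrawn' or 'withdrawn'; such entries were dropped."""
    for field in ("abstract", "comments"):
        text = record.get(field) or ""  # fields such as comments may be None
        if "Withdrawn" in text or "withdrawn" in text:
            return True
    return False
```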

In addition, we cleaned the data appearing in `abstract` by first replacing all occurrences of '\n' with a whitespace and then removing any leading and trailing whitespace.
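
The cleaning step above amounts to (function name illustrative):

```python
def clean_abstract(abstract: str) -> str:
    # Replace every newline with a space, then trim leading/trailing whitespace.
    return abstract.replace("\n", " ").strip()


# clean_abstract(' We discuss\nthe coupling\n') -> 'We discuss the coupling'
```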

### Data splits

The dataset is split into training, validation and test sets with split percentages of 70%, 15% and 15%. This was done by applying `train_test_split` twice (both times with `seed=42`).
The final split sizes are as follows:

| Train | Test | Validation |
|:---:|:---:|:---:|
| 73,768 | 15,808 | 15,808 |
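
The two-step scheme can be sketched with the standard library (the actual splits were made with `train_test_split` and `seed=42`; the shuffle and the ceiling rounding here are assumptions, the rounding chosen so the sizes match the table):

```python
import math
import random

def two_step_split(ids, seed=42):
    """Illustrative 70/15/15 split made by splitting twice:
    first carve off 30% as a holdout, then halve the holdout
    into test and validation."""
    shuffled = list(ids)
    random.Random(seed).shuffle(shuffled)
    n_holdout = math.ceil(len(shuffled) * 0.30)  # test + validation
    holdout, train = shuffled[:n_holdout], shuffled[n_holdout:]
    half = len(holdout) // 2
    test, validation = holdout[:half], holdout[half:]
    return train, test, validation
```

Applied to the 105,384 retained records, this scheme gives splits of sizes 73,768 / 15,808 / 15,808.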