hep-th_primary / README.md
---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: submitter
      dtype: string
    - name: authors
      dtype: string
    - name: title
      dtype: string
    - name: comments
      dtype: string
    - name: journal-ref
      dtype: string
    - name: doi
      dtype: string
    - name: report-no
      dtype: string
    - name: categories
      dtype: string
    - name: license
      dtype: string
    - name: orig_abstract
      dtype: string
    - name: versions
      list:
        - name: created
          dtype: string
        - name: version
          dtype: string
    - name: update_date
      dtype: string
    - name: authors_parsed
      sequence:
        sequence: string
    - name: abstract
      dtype: string
  splits:
    - name: train
      num_bytes: 147667993.3685569
      num_examples: 73768
    - name: test
      num_bytes: 31644285.315721553
      num_examples: 15808
    - name: validation
      num_bytes: 31644285.315721553
      num_examples: 15808
  download_size: 115280347
  dataset_size: 210956563.99999997
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

# Dataset Card for arxiv_hep-th_primary Dataset

## Dataset Description

### Dataset Summary

This dataset contains metadata included in arXiv submissions.

## Dataset Structure

An example from the dataset looks as follows:

```python
{'id': '0908.2896',
 'submitter': 'Paul Richmond',
 'authors': 'Neil Lambert, Paul Richmond',
 'title': 'M2-Branes and Background Fields',
 'comments': '19 pages',
 'journal-ref': 'JHEP 0910:084,2009',
 'doi': '10.1088/1126-6708/2009/10/084',
 'report-no': None,
 'categories': 'hep-th',
 'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/',
 'abstract': '  We discuss the coupling of multiple M2-branes to the background 3-form and\n6-form gauge fields of eleven-dimensional supergravity, including the coupling\nof the Fermions. In particular we show in detail how a natural generalization\nof the Myers flux-terms, along with the resulting curvature of the background\nmetric, leads to mass terms in the effective field theory.\n',
 'versions': [{'created': 'Thu, 20 Aug 2009 14:23:37 GMT', 'version': 'v1'}],
 'update_date': '2009-11-09',
 'authors_parsed': [['Lambert', 'Neil', ''], ['Richmond', 'Paul', '']]}
```
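For illustration, the example record above can be handled as a plain Python dict. The sketch below (not part of the dataset tooling) shows how the authors_parsed field relates to the authors string:

```python
# Abridged example record, keeping only the fields used below.
record = {
    'authors': 'Neil Lambert, Paul Richmond',
    'authors_parsed': [['Lambert', 'Neil', ''], ['Richmond', 'Paul', '']],
}

# Each entry of authors_parsed is [surname, forename, suffix];
# rejoining the parts recovers the flat authors string.
names = ['{} {}'.format(first, last).strip()
         for last, first, _suffix in record['authors_parsed']]
rejoined = ', '.join(names)
print(rejoined)  # Neil Lambert, Paul Richmond
```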

### Languages

The text in the abstract field of the dataset is in English; however, some abstracts may also contain a translation into another language.

## Dataset Creation

### Curation Rationale

The starting point was to load v193 of the Kaggle arXiv Dataset, which includes arXiv submissions up to 23rd August 2024. The arXiv dataset contains the following data fields:

  • id: ArXiv ID (can be used to access the paper)
  • submitter: Who submitted the paper
  • authors: Authors of the paper
  • title: Title of the paper
  • comments: Additional info, such as number of pages and figures
  • journal-ref: Information about the journal the paper was published in
  • doi: Digital Object Identifier
  • report-no: Report Number
  • abstract: The abstract of the paper
  • categories: Categories / tags in the ArXiv system

To arrive at the arxiv_hep-th_primary dataset, the full arXiv data was filtered so that only records whose categories included 'hep-th' were retained. This left papers that were either primarily classified as 'hep-th' or merely cross-listed under it. For this dataset, the decision was made to keep only papers primarily classified as 'hep-th', which meant retaining only those records whose categories string begins with 'hep-th' (see here for more details).

We also dropped entries whose abstract or comments contained the word 'Withdrawn' or 'withdrawn' and we removed the five records which appear in the repo LLMsForHepth/arxiv_hepth_first_overfit.
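A minimal sketch of the two filters described above, written over a plain list of metadata dicts (the helper names are illustrative; the card does not show the actual implementation):

```python
def is_primary_hepth(record):
    # The primary category is listed first, so the categories string
    # starts with 'hep-th' exactly when the paper is primarily hep-th.
    return (record.get('categories') or '').startswith('hep-th')

def is_withdrawn(record):
    # Drop entries whose abstract or comments mention withdrawal
    # ('Withdrawn' or 'withdrawn'; lower-casing covers both).
    text = (record.get('abstract') or '') + ' ' + (record.get('comments') or '')
    return 'withdrawn' in text.lower()

records = [
    {'categories': 'hep-th', 'abstract': 'We study M2-branes.', 'comments': '19 pages'},
    {'categories': 'gr-qc hep-th', 'abstract': 'Cross-listed paper.', 'comments': None},
    {'categories': 'hep-th', 'abstract': 'Withdrawn by the authors.', 'comments': ''},
]

kept = [r for r in records if is_primary_hepth(r) and not is_withdrawn(r)]
print(len(kept))  # 1
```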

In addition, we cleaned the data in the abstract field by first replacing all occurrences of '\n' with a whitespace and then removing any leading and trailing whitespace.
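The cleaning step described above amounts to the following (a sketch; the function name is illustrative):

```python
def clean_abstract(text):
    # Replace every newline with a space, then trim any
    # leading and trailing whitespace.
    return text.replace('\n', ' ').strip()

raw = ('  We discuss the coupling of multiple M2-branes to the background '
       '3-form and\n6-form gauge fields.\n')
print(clean_abstract(raw))
```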

### Data splits

The dataset is split into a training, validation and test set with split percentages 70%, 15% and 15%. This was done by applying train_test_split twice (both with seed=42). The final split sizes are as follows:

| Train  | Test   | Validation |
| ------ | ------ | ---------- |
| 73,768 | 15,808 | 15,808     |
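The 70/15/15 split via two applications of train_test_split can be sketched with a plain-Python stand-in for the datasets-library call (the fractions and seed are from the card; the function below is illustrative):

```python
import random

def two_stage_split(ids, seed=42):
    # First split: 70% train, 30% held out.
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    n_train = int(0.7 * len(ids))
    train, held_out = ids[:n_train], ids[n_train:]
    # Second split: halve the held-out 30% into test and validation
    # (15% each), reusing the same seed.
    rng = random.Random(seed)
    rng.shuffle(held_out)
    n_test = len(held_out) // 2
    test, validation = held_out[:n_test], held_out[n_test:]
    return train, test, validation

train, test, val = two_stage_split(range(1000))
print(len(train), len(test), len(val))  # 700 150 150
```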