|
|
--- |
|
|
license: mit |
|
|
task_categories: |
|
|
- text-classification |
|
|
language: |
|
|
- en |
|
|
size_categories: |
|
|
- 100K<n<1M |
|
|
source_datasets: |
|
|
- original |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train |
|
|
path: train.zip |
|
|
- split: validation |
|
|
path: val.zip |
|
|
- split: test |
|
|
path: test.zip |
|
|
pretty_name: MSC |
|
|
--- |
|
|
## Dataset Description |
|
|
The titles and abstracts of 164,230 arXiv preprints, each associated with at least one [MSC (Mathematics Subject Classification)](https://en.wikipedia.org/wiki/Mathematics_Subject_Classification) code. Predicting 3-character MSC codes from the cleaned text (the processed title and abstract) is a multi-label classification task.
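The splits ship as zip archives; a minimal loading sketch, assuming each archive holds a single CSV file (an in-memory toy archive stands in for `train.zip`, and the column names are illustrative):

```python
import io
import zipfile

import pandas as pd

# Build a toy archive in memory, mimicking train.zip (assumption: each
# archive contains one CSV; the MSC column name "14G" is hypothetical).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train.csv", "cleaned_text,14G\ntoy stem token,1\n")
buf.seek(0)

# Read the CSV out of the archive.
with zipfile.ZipFile(buf) as zf:
    with zf.open("train.csv") as f:
        df = pd.read_csv(f)
```

On the Hugging Face Hub, `datasets.load_dataset` with the repository id should resolve the same `data_files` configuration declared in the YAML header above.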
|
|
|
|
|
## Dataset Structure |
|
|
- The column `cleaned_text` should be used as the input for the text classification task. It is obtained by processing the titles and abstracts of math-related preprints.
|
|
- The last 531 columns are one-hot encoded MSC classes, and should be used as target variables of the multi-label classification task. |
|
|
- Other columns are auxiliary: |
|
|
- `url`: the URL of the preprint (the latest version as of December 2023),
|
|
- `title`: the original title,
|
|
- `abstract`: the original abstract,
|
|
- `primary_category`: the primary [arXiv category](https://arxiv.org/category_taxonomy) (for this data, almost always a category of the math archive or the mathematical physics archive).
|
|
- **Subtask**: predicting `primary_category` from `cleaned_text` is a multi-class text classification task with ~30 distinct labels.
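The structure above suggests a simple split of the columns into features and targets. A minimal sketch with a toy frame (the MSC class names `14G` and `11M` are hypothetical stand-ins for the 531 real one-hot columns):

```python
import pandas as pd

# Toy frame mimicking the layout: auxiliary columns, cleaned_text, then
# the one-hot MSC columns at the end.
df = pd.DataFrame({
    "url": ["https://arxiv.org/abs/0000.00000"],
    "primary_category": ["math.AG"],
    "cleaned_text": ["toy stem token"],
    "14G": [1],   # hypothetical MSC class column
    "11M": [0],   # hypothetical MSC class column
})

n_classes = 2                  # 531 in the real data
X = df["cleaned_text"]         # model input
Y = df.iloc[:, -n_classes:]    # multi-label targets (last n_classes columns)
```

For the subtask, the target would instead be the single `primary_category` column.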
|
|
|
|
|
|
|
|
## Data Splits |
|
|
Stratified sampling was used to split the data, so that the class proportions of a target variable are similar across the splits.
|
|
|
|
|
|File |Description |Number of instances |
|
|
|---------|------------------|----------------------| |
|
|
|main.zip |the whole data |164,230 | |
|
|
|train.zip|the training set |104,675 | |
|
|
|val.zip |the validation set|18,540 | |
|
|
|test.zip |the test set |41,015 | |
|
|
|
|
|
## Data Collection and Cleaning |
|
|
The details are outlined in this [notebook](https://github.com/FilomKhash/Math-Preprint-Classifier/blob/main/Scarping%20and%20Cleaning%20the%20Data.ipynb). |
|
|
As for the raw data, we used the [arxiv package](https://pypi.org/project/arxiv/) to scrape preprints listed or cross-listed under the math archive. The raw data was then processed by:
|
|
|
|
|
- dropping preprints with an abnormally high number of versions, |
|
|
- keeping only the last arXiv version, |
|
|
- dropping preprints whose metadata does not include any MSC class, |
|
|
- dropping entries using the pre-2010 Mathematics Subject Classification convention,
|
|
- concatenating abstract and title strings and carrying out the following steps to obtain the `cleaned_text` column: |
|
|
- removing LaTeX math environments and URL citations,
|
|
- lowercasing the text, normalizing accents, and removing special characters,
|
|
- removing English and some corpus-specific stop words, |
|
|
- stemming. |
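The cleaning steps above can be sketched with the standard library alone. This is not the authors' exact pipeline: the stop-word list is a toy subset, and the suffix stripper is a crude stand-in for a real stemmer such as Porter's.

```python
import re
import unicodedata

STOP_WORDS = {"the", "a", "of", "and", "in", "for", "we", "is"}  # toy subset

def _stem(word: str) -> str:
    """Crude suffix stripping, a stand-in for a real stemmer."""
    for suffix in ("ization", "ations", "ation", "ings", "ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def clean(text: str) -> str:
    text = re.sub(r"\$[^$]*\$", " ", text)        # drop inline LaTeX math
    text = re.sub(r"https?://\S+", " ", text)     # drop URLs
    text = unicodedata.normalize("NFKD", text)    # decompose accents...
    text = text.encode("ascii", "ignore").decode()  # ...and strip them
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)         # remove special characters
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return " ".join(_stem(t) for t in tokens)
```

For example, `clean("The Hölder continuity of solutions $u \\in L^2$")` yields `"holder continuity solution"`.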
|
|
|
|
|
## Citation |
|
|
<https://github.com/FilomKhash/Math-Preprint-Classifier> |