Upload folder using huggingface_hub
- README.md +171 -0
- create_dataset.py +324 -0
- explore_dataset.py +420 -0
- requirements.txt +7 -0
README.md
ADDED
@@ -0,0 +1,171 @@
---
license: mit
task_categories:
- text-classification
- summarization
language:
- en
tags:
- arxiv
- academic-papers
- research
- computer-science
- machine-learning
size_categories:
- 1K<n<10K
---

# CS/ML Academic Papers Dataset

A curated dataset of academic paper metadata sourced from [arXiv](https://arxiv.org/), spanning key Computer Science and Machine Learning categories. Designed for research in text classification, topic modelling, summarisation, and scientometric analysis.

## Dataset Description

### Overview

This dataset contains metadata for **2,500+** recently published papers across five core arXiv categories at the intersection of computer science and machine learning:

| Category  | Description                             |
|-----------|-----------------------------------------|
| `cs.AI`   | Artificial Intelligence                 |
| `cs.CL`   | Computation and Language (NLP)          |
| `cs.CV`   | Computer Vision and Pattern Recognition |
| `cs.LG`   | Machine Learning                        |
| `stat.ML` | Statistics — Machine Learning           |

Papers are sorted by submission date (most recent first) and deduplicated by arXiv ID, so cross-listed papers appear only once.

### Collection Methodology

1. The [arXiv API](https://info.arxiv.org/help/api/index.html) is queried via the [`arxiv`](https://pypi.org/project/arxiv/) Python package.
2. For each of the five categories, the 500 most recent papers are fetched (configurable).
3. Results are deduplicated on `arxiv_id` — papers cross-listed in multiple categories keep only their first occurrence.
4. Rate limiting (3.5 s between pages, 5 retries on failure) ensures compliance with arXiv's API Terms of Service.

The collection script is fully reproducible: see [`create_dataset.py`](create_dataset.py).
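
For orientation, here is a minimal sketch of the fetch-and-deduplicate loop; the full, logged version lives in `create_dataset.py`, and the category and result count below are illustrative only:

```python
import arxiv

# Polite client: 100 results per page, 3.5 s between pages, 5 retries.
client = arxiv.Client(page_size=100, delay_seconds=3.5, num_retries=5)
search = arxiv.Search(
    query="cat:cs.LG",
    max_results=10,  # illustrative; the script uses 500 per category
    sort_by=arxiv.SortCriterion.SubmittedDate,
)

seen: set[str] = set()
for result in client.results(search):
    arxiv_id = result.entry_id.split("/abs/")[-1]
    if arxiv_id in seen:  # cross-listed paper already collected
        continue
    seen.add(arxiv_id)
    print(arxiv_id, result.title[:60])
```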

### Schema

| Column             | Type           | Description                                              |
|--------------------|----------------|----------------------------------------------------------|
| `arxiv_id`         | `string`       | arXiv identifier (e.g. `2401.12345v1`)                   |
| `title`            | `string`       | Paper title, whitespace-normalised                       |
| `abstract`         | `string`       | Full abstract text, whitespace-normalised                |
| `authors`          | `list[string]` | Ordered list of author names                             |
| `categories`       | `list[string]` | All arXiv categories assigned to the paper               |
| `primary_category` | `string`       | The primary arXiv category                               |
| `published`        | `string`       | ISO-8601 publication timestamp                           |
| `updated`          | `string`       | ISO-8601 last-updated timestamp                          |
| `doi`              | `string`       | Digital Object Identifier (empty string if unavailable)  |
| `url`              | `string`       | Canonical arXiv abstract URL                             |
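
The per-split Parquet files written by `create_dataset.py` can also be inspected directly with pandas; a quick sketch, assuming the default `./data` output directory:

```python
import pandas as pd

df = pd.read_parquet("data/train.parquet")
print(df.dtypes)

# List-valued columns (authors, categories) load as object arrays;
# apply len() element-wise for quick aggregations.
df["n_authors"] = df["authors"].apply(len)
print(df.groupby("primary_category")["n_authors"].mean())
```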

### Splits

| Split   | Approximate Size |
|---------|------------------|
| `train` | ~90%             |
| `test`  | ~10%             |

The split is performed with a fixed random seed (`42`) for reproducibility.
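
A toy illustration of the exact split call used by `build_dataset()` in `create_dataset.py` (the dummy records here are placeholders):

```python
from datasets import Dataset

ds = Dataset.from_dict({"arxiv_id": [f"2401.{i:05d}" for i in range(100)]})
splits = ds.train_test_split(test_size=0.1, seed=42)  # deterministic 90/10
print(len(splits["train"]), len(splits["test"]))  # 90 10
```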

## Intended Uses

- **Text classification** — predict the primary arXiv category from the title or abstract (see the baseline sketch after this list).
- **Summarisation** — generate concise summaries from abstracts or use titles as abstractive targets.
- **Topic modelling** — discover latent research themes across categories.
- **Scientometrics** — analyse publication trends, author collaboration patterns, and category overlap.
- **Information retrieval** — build search or recommendation engines over academic literature.
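
A small, untuned baseline for the text-classification use case; the TF-IDF-plus-logistic-regression choice is our illustration, not part of the dataset tooling:

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

ds = load_dataset("gr8monk3ys/cs-ml-academic-papers")

# Predict the primary category from the abstract text.
clf = make_pipeline(
    TfidfVectorizer(max_features=20_000, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
clf.fit(ds["train"]["abstract"], ds["train"]["primary_category"])

preds = clf.predict(ds["test"]["abstract"])
print(f"accuracy: {accuracy_score(ds['test']['primary_category'], preds):.3f}")
```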

## Example

```python
from datasets import load_dataset

ds = load_dataset("gr8monk3ys/cs-ml-academic-papers")

# Peek at the first training example
print(ds["train"][0])
# {
#   "arxiv_id": "2401.12345v1",
#   "title": "An Example Paper on Large Language Models",
#   "abstract": "We propose a novel method for ...",
#   "authors": ["Alice Researcher", "Bob Scientist"],
#   "categories": ["cs.CL", "cs.AI"],
#   "primary_category": "cs.CL",
#   "published": "2024-01-20T18:00:00+00:00",
#   "updated": "2024-01-22T12:30:00+00:00",
#   "doi": "10.1234/example.2024.001",
#   "url": "http://arxiv.org/abs/2401.12345v1"
# }
```
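
A couple of follow-up operations on the loaded `DatasetDict` (illustrative only):

```python
# Filter to NLP papers.
nlp_papers = ds["train"].filter(lambda ex: ex["primary_category"] == "cs.CL")
print(len(nlp_papers))

# ISO-8601 timestamps sort chronologically as plain strings.
latest = sorted(ds["train"]["published"], reverse=True)[:3]
print(latest)
```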

## Reproducing the Dataset

```bash
# Install dependencies
pip install -r requirements.txt

# Fetch papers and save locally (Parquet + HF Dataset format)
python create_dataset.py

# Optionally push to the HuggingFace Hub
python create_dataset.py --push --hf-repo gr8monk3ys/cs-ml-academic-papers

# Run exploratory data analysis and generate plots
python explore_dataset.py
```

### CLI Options — `create_dataset.py`

| Flag              | Default                              | Description                              |
|-------------------|--------------------------------------|------------------------------------------|
| `--per-category`  | `500`                                | Papers to fetch per arXiv category       |
| `--push`          | off                                  | Push dataset to HuggingFace Hub          |
| `--hf-repo`       | `gr8monk3ys/cs-ml-academic-papers`   | Hub repository ID                        |
| `--output-dir`    | `./data`                             | Local output directory                   |
| `--verbose`       | off                                  | Enable debug logging                     |

### CLI Options — `explore_dataset.py`

| Flag           | Default                              | Description                              |
|----------------|--------------------------------------|------------------------------------------|
| `--data-dir`   | `./data`                             | Local data directory                     |
| `--from-hub`   | —                                    | Load from HF Hub repo instead            |
| `--plots-dir`  | `./plots`                            | Output directory for PNG plots           |
| `--verbose`    | off                                  | Enable debug logging                     |

## Generated Visualisations

The EDA script (`explore_dataset.py`) produces the following plots:

- **category_distribution.png** — horizontal bar chart of paper counts per primary category.
- **abstract_length_histogram.png** — word-count distribution of abstracts with mean/median markers.
- **abstract_length_by_category.png** — box plots comparing abstract lengths across categories.
- **authors_per_paper.png** — histogram of author counts.
- **publication_timeline.png** — monthly publication volume over time.
- **tfidf_terms_heatmap.png** — heatmap of the top TF-IDF terms per category.

## Limitations and Bias

- The dataset is a **snapshot** — arXiv receives thousands of new submissions daily; re-running the collection script at different times will produce different results.
- Only **five categories** are included. Many relevant papers (e.g., `cs.IR`, `cs.RO`, `math.OC`) are excluded.
- Papers cross-listed under multiple categories are kept only once, so secondary categories are under-represented in the primary-category distribution.
- Metadata only — **full paper text and PDFs are not included**.

## Citation

If you use this dataset in your work, please cite it as:

```bibtex
@misc{scaturchio2024csml_papers,
  author       = {Lorenzo Scaturchio},
  title        = {{CS/ML Academic Papers Dataset}},
  year         = {2024},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/gr8monk3ys/cs-ml-academic-papers}},
}
```

## License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT). The underlying paper metadata is sourced from arXiv and is subject to arXiv's [Terms of Use](https://info.arxiv.org/help/api/tou.html).
create_dataset.py
ADDED
@@ -0,0 +1,324 @@
#!/usr/bin/env python3
"""
create_dataset.py

Fetches academic paper metadata from the arXiv API across multiple CS/ML
categories, deduplicates the results, and publishes a clean HuggingFace
Dataset in Parquet format.

Usage
-----
# Fetch papers and save locally
python create_dataset.py

# Fetch papers and push to the HuggingFace Hub
python create_dataset.py --push --hf-repo gr8monk3ys/cs-ml-academic-papers

# Customise the number of papers per category
python create_dataset.py --per-category 1000
"""

from __future__ import annotations

import argparse
import logging
import re
import time
from pathlib import Path
from typing import Any

import arxiv
import pandas as pd
from datasets import Dataset, DatasetDict, Features, Sequence, Value
from tqdm import tqdm

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------

CATEGORIES: list[str] = ["cs.AI", "cs.CL", "cs.CV", "cs.LG", "stat.ML"]

DEFAULT_PER_CATEGORY: int = 500

# arXiv API courtesy: wait between successive queries to avoid rate-limiting.
# The arXiv API Terms of Service recommend no more than one request every
# three seconds. We are conservative and wait a bit longer between pages.
REQUEST_DELAY_SECONDS: float = 3.5

# Each arXiv search page can return at most this many results.
PAGE_SIZE: int = 100

OUTPUT_DIR: Path = Path(__file__).resolve().parent / "data"

LOG = logging.getLogger("create_dataset")

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _clean_text(text: str) -> str:
    """Collapse whitespace and strip leading/trailing blanks."""
    return re.sub(r"\s+", " ", text).strip()


def _extract_doi(entry: arxiv.Result) -> str:
    """Return the DOI if present, otherwise an empty string."""
    return entry.doi or ""

def _extract_authors(entry: arxiv.Result) -> list[str]:
    """Return the ordered list of author names."""
    return [str(a) for a in entry.authors]

def _entry_to_record(entry: arxiv.Result) -> dict[str, Any]:
    """Convert an arxiv.Result into a flat dictionary."""
    return {
        "arxiv_id": entry.entry_id.split("/abs/")[-1],
        "title": _clean_text(entry.title),
        "abstract": _clean_text(entry.summary),
        "authors": _extract_authors(entry),
        "categories": list(entry.categories),
        "primary_category": entry.primary_category,
        "published": entry.published.isoformat() if entry.published else "",
        "updated": entry.updated.isoformat() if entry.updated else "",
        "doi": _extract_doi(entry),
        "url": entry.entry_id,
    }


# ---------------------------------------------------------------------------
# Fetching
# ---------------------------------------------------------------------------


def fetch_papers_for_category(
    category: str,
    max_results: int = DEFAULT_PER_CATEGORY,
) -> list[dict[str, Any]]:
    """
    Query the arXiv API for papers in *category*, respecting rate limits.

    Parameters
    ----------
    category:
        An arXiv category string such as ``"cs.AI"`` or ``"stat.ML"``.
    max_results:
        Maximum number of papers to retrieve for this category.

    Returns
    -------
    list[dict]
        A list of paper-metadata dictionaries.
    """
    LOG.info("Fetching up to %d papers for category: %s", max_results, category)

    search = arxiv.Search(
        query=f"cat:{category}",
        max_results=max_results,
        sort_by=arxiv.SortCriterion.SubmittedDate,
        sort_order=arxiv.SortOrder.Descending,
    )

    client = arxiv.Client(
        page_size=PAGE_SIZE,
        delay_seconds=REQUEST_DELAY_SECONDS,
        num_retries=5,
    )

    records: list[dict[str, Any]] = []
    try:
        for entry in tqdm(
            client.results(search),
            total=max_results,
            desc=f"  {category}",
            unit="paper",
        ):
            records.append(_entry_to_record(entry))
    except arxiv.UnexpectedEmptyPageError:
        LOG.warning(
            "Received an empty page from arXiv for %s after %d results. "
            "Continuing with what we have.",
            category,
            len(records),
        )
    except arxiv.HTTPError as exc:
        LOG.error(
            "HTTP error while fetching %s (collected %d so far): %s",
            category,
            len(records),
            exc,
        )

    LOG.info("Collected %d papers for %s", len(records), category)
    return records


def fetch_all_papers(
    categories: list[str] | None = None,
    per_category: int = DEFAULT_PER_CATEGORY,
) -> pd.DataFrame:
    """
    Fetch papers across all requested categories and return a deduplicated
    :class:`pandas.DataFrame`.
    """
    categories = categories or CATEGORIES
    all_records: list[dict[str, Any]] = []

    for cat in categories:
        records = fetch_papers_for_category(cat, max_results=per_category)
        all_records.extend(records)
        # Extra courtesy pause between categories
        LOG.info("Pausing between categories ...")
        time.sleep(REQUEST_DELAY_SECONDS)

    df = pd.DataFrame(all_records)
    before = len(df)
    df = df.drop_duplicates(subset=["arxiv_id"], keep="first").reset_index(drop=True)
    after = len(df)
    LOG.info(
        "Deduplicated %d -> %d records (%d duplicates removed)",
        before,
        after,
        before - after,
    )
    return df


# ---------------------------------------------------------------------------
# Dataset creation
# ---------------------------------------------------------------------------

FEATURES = Features(
    {
        "arxiv_id": Value("string"),
        "title": Value("string"),
        "abstract": Value("string"),
        "authors": Sequence(Value("string")),
        "categories": Sequence(Value("string")),
        "primary_category": Value("string"),
        "published": Value("string"),
        "updated": Value("string"),
        "doi": Value("string"),
        "url": Value("string"),
    }
)


def build_dataset(df: pd.DataFrame) -> DatasetDict:
    """
    Convert a :class:`pandas.DataFrame` of paper records into a
    :class:`datasets.DatasetDict` with ``train`` / ``test`` splits
    (90 / 10).
    """
    dataset = Dataset.from_pandas(df, features=FEATURES, preserve_index=False)
    splits = dataset.train_test_split(test_size=0.1, seed=42)
    return DatasetDict({"train": splits["train"], "test": splits["test"]})

def save_dataset(dataset_dict: DatasetDict, output_dir: Path) -> None:
    """Save the dataset to disk in HF Arrow format plus per-split Parquet files."""
    output_dir.mkdir(parents=True, exist_ok=True)
    dataset_dict.save_to_disk(str(output_dir / "hf_dataset"))

    # Also save individual Parquet files for easy inspection.
    for split_name, split_ds in dataset_dict.items():
        parquet_path = output_dir / f"{split_name}.parquet"
        split_ds.to_parquet(str(parquet_path))
        LOG.info("Saved %s split (%d rows) -> %s", split_name, len(split_ds), parquet_path)

def push_to_hub(dataset_dict: DatasetDict, repo_id: str) -> None:
    """Push the dataset to the HuggingFace Hub."""
    LOG.info("Pushing dataset to HuggingFace Hub: %s", repo_id)
    dataset_dict.push_to_hub(repo_id, private=False)
    LOG.info("Successfully pushed to %s", repo_id)


# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Fetch arXiv paper metadata and create a HuggingFace Dataset.",
    )
    parser.add_argument(
        "--per-category",
        type=int,
        default=DEFAULT_PER_CATEGORY,
        help=f"Number of papers to fetch per category (default: {DEFAULT_PER_CATEGORY}).",
    )
    parser.add_argument(
        "--push",
        action="store_true",
        help="Push the dataset to the HuggingFace Hub after creation.",
    )
    parser.add_argument(
        "--hf-repo",
        type=str,
        default="gr8monk3ys/cs-ml-academic-papers",
        help="HuggingFace Hub repo ID (default: gr8monk3ys/cs-ml-academic-papers).",
    )
    parser.add_argument(
        "--output-dir",
        type=str,
        default=str(OUTPUT_DIR),
        help=f"Local output directory (default: {OUTPUT_DIR}).",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Enable debug logging.",
    )
    return parser.parse_args()


def main() -> None:
    args = parse_args()

    logging.basicConfig(
        level=logging.DEBUG if args.verbose else logging.INFO,
        format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )

    output_dir = Path(args.output_dir)
    LOG.info("=" * 60)
    LOG.info("arXiv Academic Papers Dataset Builder")
    LOG.info("=" * 60)
    LOG.info("Categories      : %s", ", ".join(CATEGORIES))
    LOG.info("Per category    : %d", args.per_category)
    LOG.info("Output directory: %s", output_dir)
    LOG.info("")

    start = time.time()

    # 1. Fetch ----------------------------------------------------------
    df = fetch_all_papers(per_category=args.per_category)

    # 2. Build dataset --------------------------------------------------
    dataset_dict = build_dataset(df)
    LOG.info(
        "Dataset built — train: %d, test: %d",
        len(dataset_dict["train"]),
        len(dataset_dict["test"]),
    )

    # 3. Save locally ---------------------------------------------------
    save_dataset(dataset_dict, output_dir)

    # 4. (Optional) push to Hub -----------------------------------------
    if args.push:
        push_to_hub(dataset_dict, args.hf_repo)

    elapsed = time.time() - start
    LOG.info("Done in %.1f seconds.", elapsed)


if __name__ == "__main__":
    main()
explore_dataset.py
ADDED
@@ -0,0 +1,420 @@
#!/usr/bin/env python3
"""
explore_dataset.py

Exploratory Data Analysis (EDA) for the CS/ML Academic Papers Dataset.

Loads the locally-saved dataset (or downloads from HuggingFace Hub), computes
summary statistics, identifies top terms per category via TF-IDF, and saves
publication-ready visualisations as PNG files.

Usage
-----
# Analyse the local dataset
python explore_dataset.py

# Load from the HuggingFace Hub instead
python explore_dataset.py --from-hub gr8monk3ys/cs-ml-academic-papers

# Customise the output directory for plots
python explore_dataset.py --plots-dir ./plots
"""

from __future__ import annotations

import argparse
import logging
from pathlib import Path

import matplotlib

# Use a non-interactive backend so the script works headlessly; the backend
# must be selected before pyplot is imported.
matplotlib.use("Agg")

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from datasets import load_from_disk
from sklearn.feature_extraction.text import TfidfVectorizer

LOG = logging.getLogger("explore_dataset")

DATA_DIR: Path = Path(__file__).resolve().parent / "data"
PLOTS_DIR: Path = Path(__file__).resolve().parent / "plots"

# Colour palette (colour-blind friendly, adapted from Tol's muted scheme).
PALETTE = ["#332288", "#88CCEE", "#44AA99", "#117733", "#999933",
           "#DDCC77", "#CC6677", "#882255", "#AA4499"]

# ---------------------------------------------------------------------------
# Loading helpers
# ---------------------------------------------------------------------------


def load_local(data_dir: Path) -> pd.DataFrame:
    """Load the dataset from a local ``save_to_disk`` directory."""
    ds_path = data_dir / "hf_dataset"
    if not ds_path.exists():
        raise FileNotFoundError(
            f"No saved dataset found at {ds_path}. "
            "Run create_dataset.py first or use --from-hub."
        )
    dd = load_from_disk(str(ds_path))
    frames = [dd[split].to_pandas() for split in dd]
    return pd.concat(frames, ignore_index=True)


def load_hub(repo_id: str) -> pd.DataFrame:
    """Download the dataset from the HuggingFace Hub."""
    from datasets import load_dataset

    dd = load_dataset(repo_id)
    frames = [dd[split].to_pandas() for split in dd]
    return pd.concat(frames, ignore_index=True)


# ---------------------------------------------------------------------------
# Statistics
# ---------------------------------------------------------------------------


def print_summary(df: pd.DataFrame) -> None:
    """Print high-level summary statistics to stdout."""
    separator = "=" * 60
    print(f"\n{separator}")
    print("  CS/ML Academic Papers Dataset — Summary Statistics")
    print(separator)

    print(f"\n  Total papers         : {len(df):,}")
    print(f"  Unique arXiv IDs     : {df['arxiv_id'].nunique():,}")
    print(f"  Unique primary cats  : {df['primary_category'].nunique()}")

    # Date range
    if "published" in df.columns:
        dates = pd.to_datetime(df["published"], errors="coerce")
        valid = dates.dropna()
        if len(valid) > 0:
            print(f"  Published date range : {valid.min():%Y-%m-%d} to {valid.max():%Y-%m-%d}")

    # Authors
    author_counts = df["authors"].apply(len)
    print(f"\n  Authors per paper (mean): {author_counts.mean():.1f}")
    print(f"  Authors per paper (med) : {author_counts.median():.0f}")

    # Abstract lengths
    abs_len = df["abstract"].str.split().str.len()
    print("\n  Abstract length (words):")
    print(f"    mean   : {abs_len.mean():.0f}")
    print(f"    median : {abs_len.median():.0f}")
    print(f"    min    : {abs_len.min():.0f}")
    print(f"    max    : {abs_len.max():.0f}")
    print(f"    std    : {abs_len.std():.1f}")

    # Category distribution
    print("\n  Primary category distribution:")
    for cat, count in df["primary_category"].value_counts().items():
        pct = 100.0 * count / len(df)
        print(f"    {cat:<12s} {count:>5,} ({pct:5.1f}%)")

    # DOI availability
    has_doi = (df["doi"].str.len() > 0).sum()
    print(f"\n  Papers with DOI : {has_doi:,} ({100*has_doi/len(df):.1f}%)")

    print(f"\n{separator}\n")


# ---------------------------------------------------------------------------
# TF-IDF keyword extraction
# ---------------------------------------------------------------------------


def top_tfidf_terms(
    df: pd.DataFrame,
    text_col: str = "abstract",
    group_col: str = "primary_category",
    top_n: int = 15,
) -> dict[str, list[tuple[str, float]]]:
    """
    For each group in *group_col*, fit a TF-IDF vectoriser on the documents
    belonging to that group and return the top-*n* terms by mean TF-IDF score.
    """
    results: dict[str, list[tuple[str, float]]] = {}

    vectorizer = TfidfVectorizer(
        max_features=5000,
        stop_words="english",
        min_df=5,
        max_df=0.85,
        ngram_range=(1, 2),
        token_pattern=r"(?u)\b[a-zA-Z][a-zA-Z+#\-]{2,}\b",
    )

    for group, sub_df in df.groupby(group_col):
        texts = sub_df[text_col].tolist()
        if len(texts) < 10:
            LOG.warning("Skipping group %s — too few documents (%d).", group, len(texts))
            continue

        tfidf_matrix = vectorizer.fit_transform(texts)
        mean_scores = np.asarray(tfidf_matrix.mean(axis=0)).flatten()
        feature_names = vectorizer.get_feature_names_out()
        top_indices = mean_scores.argsort()[::-1][:top_n]
        results[group] = [
            (feature_names[i], float(mean_scores[i])) for i in top_indices
        ]

    return results


def print_top_terms(terms_by_cat: dict[str, list[tuple[str, float]]]) -> None:
    """Pretty-print TF-IDF top terms per category."""
    print("=" * 60)
    print("  Top TF-IDF Terms per Category")
    print("=" * 60)
    for cat in sorted(terms_by_cat):
        print(f"\n  [{cat}]")
        for rank, (term, score) in enumerate(terms_by_cat[cat], 1):
            print(f"    {rank:>2}. {term:<30s} (score: {score:.4f})")
    print()


# ---------------------------------------------------------------------------
# Visualisations
# ---------------------------------------------------------------------------


def _savefig(fig: plt.Figure, path: Path) -> None:
    fig.savefig(str(path), dpi=150, bbox_inches="tight", facecolor="white")
    plt.close(fig)
    LOG.info("Saved plot -> %s", path)


def plot_category_distribution(df: pd.DataFrame, output_dir: Path) -> None:
    """Bar chart of primary-category counts."""
    counts = df["primary_category"].value_counts().sort_values(ascending=True)

    fig, ax = plt.subplots(figsize=(8, 5))
    bars = ax.barh(counts.index, counts.values, color=PALETTE[: len(counts)])
    ax.bar_label(bars, padding=4, fontsize=9)
    ax.set_xlabel("Number of Papers")
    ax.set_title("Papers by Primary arXiv Category")
    ax.spines[["top", "right"]].set_visible(False)
    fig.tight_layout()
    _savefig(fig, output_dir / "category_distribution.png")


def plot_abstract_length_histogram(df: pd.DataFrame, output_dir: Path) -> None:
    """Histogram of abstract word counts."""
    lengths = df["abstract"].str.split().str.len()

    fig, ax = plt.subplots(figsize=(8, 5))
    ax.hist(lengths, bins=50, color=PALETTE[0], edgecolor="white", alpha=0.85)
    ax.axvline(lengths.median(), color=PALETTE[6], linestyle="--", linewidth=1.5,
               label=f"Median ({lengths.median():.0f} words)")
    ax.axvline(lengths.mean(), color=PALETTE[4], linestyle=":", linewidth=1.5,
               label=f"Mean ({lengths.mean():.0f} words)")
    ax.set_xlabel("Abstract Length (words)")
    ax.set_ylabel("Frequency")
    ax.set_title("Distribution of Abstract Lengths")
    ax.legend(frameon=False)
    ax.spines[["top", "right"]].set_visible(False)
    fig.tight_layout()
    _savefig(fig, output_dir / "abstract_length_histogram.png")


def plot_abstract_length_by_category(df: pd.DataFrame, output_dir: Path) -> None:
    """Box plot of abstract lengths grouped by primary category."""
    df = df.copy()
    df["abstract_words"] = df["abstract"].str.split().str.len()

    cats = df["primary_category"].value_counts().index.tolist()
    data = [df.loc[df["primary_category"] == c, "abstract_words"].values for c in cats]

    fig, ax = plt.subplots(figsize=(8, 5))
    bp = ax.boxplot(data, labels=cats, patch_artist=True, showfliers=False)
    for patch, colour in zip(bp["boxes"], PALETTE):
        patch.set_facecolor(colour)
        patch.set_alpha(0.7)
    ax.set_ylabel("Abstract Length (words)")
    ax.set_title("Abstract Length by Primary Category")
    ax.spines[["top", "right"]].set_visible(False)
    fig.tight_layout()
    _savefig(fig, output_dir / "abstract_length_by_category.png")


def plot_authors_per_paper(df: pd.DataFrame, output_dir: Path) -> None:
    """Histogram of author counts per paper."""
    author_counts = df["authors"].apply(len)

    fig, ax = plt.subplots(figsize=(8, 5))
    max_display = int(author_counts.quantile(0.99)) + 1
    ax.hist(
        author_counts.clip(upper=max_display),
        bins=range(1, max_display + 2),
        color=PALETTE[2],
        edgecolor="white",
        alpha=0.85,
        align="left",
    )
    ax.set_xlabel("Number of Authors")
    ax.set_ylabel("Frequency")
    ax.set_title("Authors per Paper")
    ax.spines[["top", "right"]].set_visible(False)
    fig.tight_layout()
    _savefig(fig, output_dir / "authors_per_paper.png")


def plot_publication_timeline(df: pd.DataFrame, output_dir: Path) -> None:
    """Monthly publication counts over time."""
    dates = pd.to_datetime(df["published"], errors="coerce").dropna()
    monthly = dates.dt.to_period("M").value_counts().sort_index()

    fig, ax = plt.subplots(figsize=(10, 5))
    ax.bar(
        range(len(monthly)),
        monthly.values,
        color=PALETTE[1],
        edgecolor="white",
        width=1.0,
    )
    # Show a subset of tick labels to avoid crowding.
    step = max(1, len(monthly) // 12)
    tick_indices = list(range(0, len(monthly), step))
    ax.set_xticks(tick_indices)
    ax.set_xticklabels(
        [str(monthly.index[i]) for i in tick_indices],
        rotation=45,
        ha="right",
        fontsize=8,
    )
    ax.set_xlabel("Month")
    ax.set_ylabel("Number of Papers")
    ax.set_title("Publication Timeline (Monthly)")
    ax.spines[["top", "right"]].set_visible(False)
    fig.tight_layout()
    _savefig(fig, output_dir / "publication_timeline.png")


def plot_top_terms_heatmap(
    terms_by_cat: dict[str, list[tuple[str, float]]],
    output_dir: Path,
    top_n: int = 10,
) -> None:
    """Heatmap-style visualisation of top TF-IDF terms across categories."""
    # Gather the union of top terms across all categories.
    all_terms: list[str] = []
    for cat in sorted(terms_by_cat):
        for term, _ in terms_by_cat[cat][:top_n]:
            if term not in all_terms:
                all_terms.append(term)

    cats = sorted(terms_by_cat.keys())
    matrix = np.zeros((len(all_terms), len(cats)))
    for j, cat in enumerate(cats):
        term_map = dict(terms_by_cat[cat])
        for i, term in enumerate(all_terms):
            matrix[i, j] = term_map.get(term, 0.0)

    fig, ax = plt.subplots(figsize=(10, max(6, 0.35 * len(all_terms))))
    im = ax.imshow(matrix, aspect="auto", cmap="YlOrRd", interpolation="nearest")
    ax.set_xticks(range(len(cats)))
    ax.set_xticklabels(cats, fontsize=9)
    ax.set_yticks(range(len(all_terms)))
    ax.set_yticklabels(all_terms, fontsize=8)
    ax.set_title("Top TF-IDF Terms by Category")
    fig.colorbar(im, ax=ax, label="Mean TF-IDF Score", shrink=0.6)
    fig.tight_layout()
    _savefig(fig, output_dir / "tfidf_terms_heatmap.png")


# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Exploratory Data Analysis for the CS/ML Academic Papers Dataset.",
    )
    parser.add_argument(
        "--data-dir",
        type=str,
        default=str(DATA_DIR),
        help=f"Local data directory (default: {DATA_DIR}).",
    )
    parser.add_argument(
        "--from-hub",
        type=str,
        default=None,
        help="Load the dataset from a HuggingFace Hub repo instead of locally.",
    )
    parser.add_argument(
        "--plots-dir",
        type=str,
        default=str(PLOTS_DIR),
        help=f"Directory for saved plots (default: {PLOTS_DIR}).",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Enable debug logging.",
    )
    return parser.parse_args()


def main() -> None:
    args = parse_args()

    logging.basicConfig(
        level=logging.DEBUG if args.verbose else logging.INFO,
        format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )

    # ------------------------------------------------------------------
    # 1. Load data
    # ------------------------------------------------------------------
    if args.from_hub:
        LOG.info("Loading dataset from HuggingFace Hub: %s", args.from_hub)
        df = load_hub(args.from_hub)
    else:
        LOG.info("Loading dataset from local directory: %s", args.data_dir)
        df = load_local(Path(args.data_dir))

    LOG.info("Loaded %d papers.", len(df))

    # ------------------------------------------------------------------
    # 2. Summary statistics
    # ------------------------------------------------------------------
    print_summary(df)

    # ------------------------------------------------------------------
    # 3. TF-IDF keyword extraction
    # ------------------------------------------------------------------
    LOG.info("Computing TF-IDF top terms per category ...")
    terms_by_cat = top_tfidf_terms(df)
    print_top_terms(terms_by_cat)

    # ------------------------------------------------------------------
    # 4. Visualisations
    # ------------------------------------------------------------------
    plots_dir = Path(args.plots_dir)
    plots_dir.mkdir(parents=True, exist_ok=True)
    LOG.info("Generating visualisations -> %s", plots_dir)

    plot_category_distribution(df, plots_dir)
    plot_abstract_length_histogram(df, plots_dir)
    plot_abstract_length_by_category(df, plots_dir)
    plot_authors_per_paper(df, plots_dir)
    plot_publication_timeline(df, plots_dir)

    if terms_by_cat:
        plot_top_terms_heatmap(terms_by_cat, plots_dir)

    LOG.info("All plots saved to %s", plots_dir)
    print(f"Visualisations saved to: {plots_dir}")


if __name__ == "__main__":
    main()
requirements.txt
ADDED
@@ -0,0 +1,7 @@
datasets>=2.14.0
arxiv>=2.1.0
pandas>=2.0.0
matplotlib>=3.7.0
scikit-learn>=1.3.0
huggingface_hub>=0.17.0
tqdm>=4.65.0