---
license: odc-by
task_categories:
- text-generation
- question-answering
tags:
- scientific-papers
- arxiv
- citation-prediction
- author-prediction
- collaboration-prediction
- research-forecasting
size_categories:
- 100K<n<1M
language:
- en
---
# PreScience: A Dataset and Benchmark for Scientific Forecasting

## Dataset Summary
Can AI systems trained on the existing scientific record forecast the advances that will follow? We introduce PreScience, a dataset and benchmark for scientific forecasting built around 98K recent AI research papers, together with companion papers covering author publication histories and citation links, yielding 502K papers in total. Each paper record includes the title, abstract, disambiguated author identities, influential references, topic labels, citation trajectories, and metadata snapshotted to respect temporal cutoffs. We instantiate seven exemplar tasks: five paper-anchored tasks (contribution generation, collaborator prediction, prior work selection, citation count prediction, and future combination prediction) and two aggregate topic trend forecasting variants. We develop baselines ranging from simple heuristics and embedding methods to frontier language models and agentic systems, and introduce LACER, an LLM-based metric for evaluating the similarity of generated contribution descriptions that agrees better with human judgments than existing metrics. Finally, we compose task models to generate a 12-month synthetic corpus and find the resulting papers to be systematically less diverse and less novel than human-authored research from the same period.
## Dataset Statistics

| Split | Target Papers | Total Papers | Unique Authors | Date Range |
|-----------|---------------|--------------|----------------|----------------------|
| Train | 44,984 | 373,707 | 106,904 | Oct 2023 - Sept 2024 |
| Test | 52,836 | 464,942 | 129,020 | Oct 2024 - Sept 2025 |
| **Total** | **97,820** | **501,866** | **182,727** | Oct 2023 - Sept 2025 |
**arXiv Categories**: cs.CL, cs.LG, cs.AI, cs.CV, cs.IR, cs.NE

**Average Statistics** (computed over target papers):

- Authors per paper: 5.15
- Words in abstract: 187.1
- Key references: 3.08 (median: 3)
- Citations at 12 months: 5.57
## Dataset Structure

### Dataset Construction

PreScience is built from research papers posted to arXiv from October 2023 to September 2025 in six AI-adjacent categories: **cs.CL, cs.LG, cs.AI, cs.CV, cs.IR, and cs.NE**. These constitute the **target papers** in our benchmark. Papers are represented by their titles and abstracts (full texts are not included).
We include a set of **companion papers** consisting of:

- Key references of target papers
- Prior publications of target authors
- Key references of those prior publications

Together, these form the historical corpus H<sub>&lt;t</sub> used to condition all tasks.
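As a minimal sketch of how this conditioning corpus can be materialized, assuming `papers` is a dict mapping `corpus_id` to paper records loaded from the parquet splits (the helper name below is ours, not part of the codebase):

```python
def historical_corpus(target: dict, papers: dict) -> list[dict]:
    """Gather H_<t for one target paper: every released paper dated
    strictly before the target's publication date."""
    cutoff = target["date"]  # "YYYY-MM-DD" strings compare chronologically
    return [p for p in papers.values() if p["date"] < cutoff]
```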
### Ensuring Dataset Quality

We apply several design choices to ensure that PreScience supports reliable modeling and evaluation rather than reflecting artifacts of noisy metadata or degenerate task instances:

- **Author disambiguation**: We disambiguate author profiles using the S2AND pipeline (Subramanian et al., 2021), yielding better author clusters than the current Semantic Scholar Academic Graph release
- **Key reference filtering**: We restrict target papers to those with between 1 and 10 key references, excluding instances with zero or unusually large key-reference sets
- **Temporal alignment**: All author- and reference-level metadata (publication counts, citation counts, h-indices) are temporally aligned to each paper's publication date to prevent leakage of future information into task inputs (a quick consistency check is sketched after this list)
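One way to sanity-check the temporal cutoff on loaded records is to verify that no key reference postdates its target. A sketch, again assuming `papers` maps `corpus_id` to paper records; the function name is ours:

```python
from datetime import date

def check_temporal_cutoff(target: dict, papers: dict) -> None:
    """Assert that every key reference present in the release predates the target."""
    cutoff = date.fromisoformat(target["date"])
    for ref in target["key_references"]:
        ref_paper = papers.get(ref["corpus_id"])
        if ref_paper is None:
            continue  # the reference may fall outside the released corpus
        assert date.fromisoformat(ref_paper["date"]) <= cutoff, (
            f"reference {ref['corpus_id']} postdates target {target['corpus_id']}"
        )
```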
### Files

This dataset contains:

1. **`train.parquet`**: Training period papers (373,707 papers from Oct 2023 - Sept 2024)
2. **`test.parquet`**: Test period papers (464,942 papers from Oct 2024 - Sept 2025)
3. **`author_disambiguation.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG author IDs
4. **`author_publications.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG corpus IDs of their publications
### Notes on author mapping files

The two author mapping files (`author_disambiguation.jsonl` and `author_publications.jsonl`) are the merged outputs of the S2AND disambiguation pass, run over a broader pre-pruning mention pool than the final corpus.

- **Empty publication lists are expected.** Some `sd_id`s (S2AND-disambiguated author IDs) have empty publication lists either because (a) the author has publications only in the *other* split's corpus, or (b) every paper carrying that author's signature was removed during corpus filtering (non-arXiv versions, missing abstracts, max-key-references caps, etc.). The disambiguation cluster identity is preserved in `author_disambiguation.jsonl` regardless.
- **Keys are unioned across splits.** For each `sd_id` present in both train and test, the published `author_publications.jsonl` and `author_disambiguation.jsonl` entries are the per-key union of the two splits' values. Treat these as global per-cluster aggregations, not per-split views.
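A minimal loading sketch, assuming one JSON object per line; the field names `sd_id` and `corpus_ids` below are assumptions to check against the actual files:

```python
import json

author_pubs: dict[str, list[str]] = {}
with open("author_publications.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # Field names are assumed; adjust to the actual schema.
        author_pubs[record["sd_id"]] = record["corpus_ids"]

# Empty publication lists are expected (see the notes above), so guard lookups.
some_sd_id = next(iter(author_pubs))
history = author_pubs.get(some_sd_id, [])
```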
### Paper Schema

#### Roles

Papers in the dataset are each assigned a subset of the following roles:

- `target`: Primary evaluation papers (Oct 2023 - Sept 2024 for train, Oct 2024 - Sept 2025 for test)
- `target.key_reference`: Highly influential papers cited by targets
- `target.author.publication_history`: Prior work by target paper authors
- `target.author.publication_history.key_reference`: Key refs of authors' prior work
Each paper record contains:

```python
{
    # Basic metadata (available for all papers)
    "corpus_id": str,         # S2AG corpus ID
    "arxiv_id": str,          # arXiv identifier
    "date": str,              # Publication date (YYYY-MM-DD)
    "categories": list[str],  # arXiv categories
    "title": str,             # Paper title
    "abstract": str,          # Paper abstract
    "topics": list[str],      # Multi-label topic assignments (target papers only; from a fixed list of 202 topics)
    "roles": list[str],       # Paper roles in dataset

    # Citation data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
    "key_references": list[{  # Highly influential references
        "corpus_id": str,
        "num_citations": int  # Citations at target paper date
    }],

    # Author data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
    "authors": list[{         # Author roster
        "author_id": str,     # S2AND-disambiguated ID
        "name": str,
        "publication_history": list[str],  # Prior corpus_ids
        "h_index": int,       # At target paper date
        "num_papers": int,
        "num_citations": int
    }],

    # Impact data (target papers only)
    "citation_trajectory": list[int]  # Monthly cumulative citation counts
}
```
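For instance, the 12-month citation figure reported in the statistics above can be read off the trajectory. A sketch, assuming entry *i* is the cumulative count *i+1* months after publication; verify this indexing against the data:

```python
def citations_at_month(paper: dict, month: int = 12) -> int:
    """Cumulative citations `month` months after publication (assumed indexing)."""
    traj = paper["citation_trajectory"]
    return traj[min(month, len(traj)) - 1]  # clamps to the last observed month
```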
## Usage

### Using with PreScience codebase

The PreScience codebase includes a helper function to load data from HuggingFace:

```python
import utils

# Load from HuggingFace
all_papers, author_disambiguation, embeddings = utils.load_corpus(
    hf_repo_id="PreSciencePreScience/PreScience",
    split="test",
    embeddings_dir="./embeddings",  # Optional: for embedding-based baselines
    embedding_type="grit"           # Optional: gtr, specter2, or grit
)
```
### Ad Hoc Loading

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("PreSciencePreScience/PreScience")

# Access a paper
paper = dataset["test"][0]
print(f"Title: {paper['title']}")
print(f"Authors: {len(paper['authors'])}")
print(f"Roles: {paper['roles']}")
```
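Since every record carries its `roles` list, you can, for example, restrict a split to target papers:

```python
# Collect only the target papers from the test split.
targets = [p for p in dataset["test"] if "target" in p["roles"]]
print(f"Target papers: {len(targets)}")  # expect 52,836 per the statistics table
```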
## Computing Embeddings

Embeddings are not included in this dataset, but can be computed using the `dataset/embeddings/compute_paper_embeddings.py` script provided with the PreScience codebase.
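If you only need quick title-plus-abstract embeddings outside that script, here is a minimal sketch using a publicly available GTR-family checkpoint; this is not the official pipeline, whose exact models and preprocessing may differ:

```python
from sentence_transformers import SentenceTransformer

# A GTR-family checkpoint; the official script may use a different model.
model = SentenceTransformer("sentence-transformers/gtr-t5-base")

papers = [{"title": "An Example Paper", "abstract": "An example abstract."}]  # stand-in records
texts = [f"{p['title']} {p['abstract']}" for p in papers]
embeddings = model.encode(texts, normalize_embeddings=True)
```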