---
license: odc-by
task_categories:
- text-generation
- question-answering
tags:
- scientific-papers
- arxiv
- citation-prediction
- author-prediction
- collaboration-prediction
- research-forecasting
size_categories:
- 100K<n<1M
language:
- en
---
# PreScience: A Benchmark for Forecasting Scientific Contributions
## Dataset Summary
Can AI systems trained on the scientific record up to a fixed point in time forecast the scientific advances that follow? Such a capability could help researchers identify collaborators and impactful research directions, and anticipate which problems and methods will become central next. We introduce PreScience, a scientific forecasting benchmark that decomposes the research process into four interdependent generative tasks: collaborator prediction, prior work selection, contribution generation, and impact prediction. PreScience is a carefully curated dataset of 98,000 recent AI-related research papers (titles and abstracts), featuring disambiguated author identities, temporally aligned scholarly metadata, and a structured graph of companion author publication histories and citations spanning 502,000 total papers.
## Dataset Statistics
| Split | Target Papers | Total Papers | Unique Authors | Date Range |
|-------|--------------|--------------|----------------|-------------|
| Train | 44,990 | 373,716 | 106,913 | Oct 2023 - Oct 2024 |
| Test | 52,836 | 464,942 | 129,020 | Oct 2024 - Oct 2025 |
| **Total** | **97,826** | **501,866** | **182,727** | Oct 2023 - Oct 2025 |
**arXiv Categories**: cs.CL, cs.LG, cs.AI, cs.ML, cs.CV, cs.IR, cs.NE
**Average Statistics** (computed over target papers):
- Authors per paper: 5.15
- Words in abstract: 187.1
- Key references: 3.08 (median: 3)
- Citations at 12 months: 5.57
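As a minimal sketch, the averages above can be recomputed from records in the "Paper Schema" format shown below; the two toy records here are illustrative, not real dataset entries.

```python
# Toy records in the paper schema (see "Paper Schema" below).
papers = [
    {"roles": ["target"], "abstract": "one two three four",
     "authors": [{"name": "A"}, {"name": "B"}],
     "key_references": [{"corpus_id": "c1", "num_citations": 4}],
     "citation_trajectory": [0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]},
    {"roles": ["target.key_reference"], "abstract": "not a target"},
]

# Averages are computed over target papers only.
targets = [p for p in papers if "target" in p["roles"]]
avg_authors = sum(len(p["authors"]) for p in targets) / len(targets)
avg_abstract_words = sum(len(p["abstract"].split()) for p in targets) / len(targets)
avg_key_refs = sum(len(p["key_references"]) for p in targets) / len(targets)
# "Citations at 12 months" = 12th entry of the monthly cumulative trajectory.
avg_cites_12mo = sum(p["citation_trajectory"][11] for p in targets) / len(targets)
```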
## Dataset Structure
### Dataset Construction
PreScience is built from research papers posted to arXiv from October 2023 to October 2025 in seven AI-adjacent categories: **cs.CL, cs.LG, cs.AI, cs.ML, cs.CV, cs.IR, and cs.NE**. These constitute the **target papers** in our benchmark. Papers are represented by their titles and abstracts (full texts are not included).
We include a set of **companion papers** consisting of:
- Key references of target papers
- Prior publications of target authors
- Key references of those prior publications
Together, these form the historical corpus H<sub>&lt;t</sub> used to condition all tasks.
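A sketch of how the three companion-paper sources above can be gathered for one target paper. Record fields follow the "Paper Schema" section; the function name and the `by_corpus_id` in-memory index are illustrative assumptions, not part of the released codebase.

```python
def companion_ids(target, by_corpus_id):
    """Collect corpus IDs of all companion papers for one target record."""
    ids = set()
    # 1. Key references of the target paper
    for ref in target.get("key_references", []):
        ids.add(ref["corpus_id"])
    for author in target.get("authors", []):
        # 2. Prior publications of the target's authors
        for pub_id in author.get("publication_history", []):
            ids.add(pub_id)
            # 3. Key references of those prior publications
            prior = by_corpus_id.get(pub_id)
            if prior is not None:
                ids.update(r["corpus_id"] for r in prior.get("key_references", []))
    return ids
```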
### Ensuring Dataset Quality
We apply several design choices to ensure that PreScience supports reliable modeling and evaluation rather than reflecting artifacts of noisy metadata or degenerate task instances:
- **Author disambiguation**: We disambiguate author profiles using the S2AND pipeline (Subramanian et al., 2021), yielding better author clusters than the current Semantic Scholar Academic Graph release
- **Key reference filtering**: We restrict target papers to those with between 1 and 10 key references, excluding instances with zero or unusually large key-reference sets
- **Temporal alignment**: All author- and reference-level metadata (publication counts, citation counts, h-indices) are temporally aligned to each paper's publication date to prevent leakage of future information into task inputs
### Files
This dataset contains:
1. **`train.parquet`**: Training period papers (373,716 papers from Oct 2023 - Oct 2024)
2. **`test.parquet`**: Test period papers (464,942 papers from Oct 2024 - Oct 2025)
3. **`author_disambiguation.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG author IDs
4. **`author_publications.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG corpus IDs of their publications
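The two mapping files can be read with a few lines of standard-library Python. A minimal sketch, assuming each line holds one JSON object (the exact field names in the real files may differ):

```python
import json

def load_jsonl(path):
    """Read a JSONL file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# e.g. rows = load_jsonl("author_publications.jsonl")
```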
### Paper Schema
#### Roles
Each paper in the dataset is assigned one or more of the following roles:
- `target`: Primary evaluation papers (Oct 2023-2024 for train, Oct 2024-2025 for test)
- `target.key_reference`: Highly influential papers cited by targets
- `target.author.publication_history`: Prior work by target paper authors
- `target.author.publication_history.key_reference`: Key refs of authors' prior work
Each paper record contains:
```python
{
# Basic metadata (available for all papers)
"corpus_id": str, # S2AG corpus ID
"arxiv_id": str, # arXiv identifier
"date": str, # Publication date (YYYY-MM-DD)
"categories": list[str], # arXiv categories
"title": str, # Paper title
"abstract": str, # Paper abstract
"roles": list[str], # Paper roles in dataset
# Citation data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
"key_references": list[{ # Highly influential references
"corpus_id": str,
"num_citations": int # Citations at target paper date
}],
    # Author data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
"authors": list[{ # Author roster
"author_id": str, # S2AND-disambiguated ID
"name": str,
"publication_history": list[str], # Prior corpus_ids
"h_index": int, # At target paper date
"num_papers": int,
"num_citations": int
}],
# Impact data (target papers only)
"citation_trajectory": list[int] # Monthly cumulative citation counts
}
```
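The schema's temporally aligned fields can be read directly from a record. A toy example with made-up values, showing that author statistics are snapshots at the paper's date and that the trajectory is cumulative by month:

```python
# Toy record in the schema above; all values are illustrative.
paper = {
    "corpus_id": "123", "arxiv_id": "2410.00001", "date": "2024-10-01",
    "categories": ["cs.CL"], "title": "Toy Paper", "abstract": "...",
    "roles": ["target"],
    "key_references": [{"corpus_id": "c9", "num_citations": 40}],
    "authors": [{"author_id": "a1", "name": "A. Author",
                 "publication_history": ["c1", "c2"],
                 "h_index": 7, "num_papers": 12, "num_citations": 300}],
    "citation_trajectory": [0, 1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 7],
}

# Citations at 12 months = 12th entry of the monthly cumulative trajectory.
cites_12mo = paper["citation_trajectory"][11]
# h_index (like num_papers / num_citations) is aligned to the paper's date.
max_h_index = max(a["h_index"] for a in paper["authors"])
```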
## Usage
### Using with PreScience codebase
The [PreScience codebase](https://github.com/allenai/prescience) includes a helper function to load data from HuggingFace:
```python
import utils
# Load from HuggingFace
all_papers, author_disambiguation, embeddings = utils.load_corpus(
hf_repo_id="allenai/prescience",
split="test",
embeddings_dir="./embeddings", # Optional: for embedding-based baselines
embedding_type="grit" # Optional: gtr, specter2, or grit
)
```
### Ad-hoc Loading
```python
from datasets import load_dataset
# Load dataset
dataset = load_dataset("allenai/prescience")
# Access a paper
paper = dataset["test"][0]
print(f"Title: {paper['title']}")
print(f"Authors: {len(paper['authors'])}")
print(f"Roles: {paper['roles']}")
```
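Because each split mixes all four roles, most analyses start by filtering to target papers. A minimal sketch over toy stand-ins for dataset rows:

```python
# Toy rows; real rows come from load_dataset("allenai/prescience").
records = [
    {"title": "T1", "roles": ["target"]},
    {"title": "R1", "roles": ["target.key_reference"]},
    {"title": "T2", "roles": ["target", "target.author.publication_history"]},
]

# Keep papers that carry the "target" role (a paper may have several roles).
targets = [r for r in records if "target" in r["roles"]]
```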
## Computing Embeddings
Embeddings are not included in this dataset, but can be computed using the `dataset/embeddings/compute_paper_embeddings.py` script provided with the [PreScience codebase](https://github.com/allenai/prescience).
## Citation
```bibtex
@article{prescience2025,
title={PreScience: A Benchmark for Forecasting Scientific Contributions},
  author={Anirudh Ajith and Amanpreet Singh and Jay DeYoung and Nadav Kunievsky and Austin C. Kozlowski and Oyvind Tafjord and James Evans and Daniel S. Weld and Tom Hope and Doug Downey},
journal={[TBD]},
year={2026}
}
```
## License
This dataset is released under the Open Data Commons Attribution License (ODC-BY).
## Links
- **Repository**: https://github.com/allenai/prescience
- **Paper**: [TBD]