---
license: odc-by
task_categories:
- text-generation
- question-answering
tags:
- scientific-papers
- arxiv
- citation-prediction
- author-prediction
- collaboration-prediction
- research-forecasting
size_categories:
- 100K<n<1M
language:
- en
---

# PreScience: A Benchmark for Forecasting Scientific Contributions

## Dataset Summary

Can AI systems trained on the scientific record up to a fixed point in time forecast the scientific advances that follow? Such a capability could help researchers identify collaborators and impactful research directions, and anticipate which problems and methods will become central next. We introduce PreScience, a scientific forecasting benchmark that decomposes the research process into four interdependent generative tasks: collaborator prediction, prior work selection, contribution generation, and impact prediction. PreScience is a carefully curated dataset of 98,000 recent AI-related research papers (titles and abstracts), featuring disambiguated author identities, temporally aligned scholarly metadata, and a structured graph of companion author publication histories and citations spanning 502,000 total papers. We develop baselines and evaluations for each task, including LACERScore, a novel LLM-based measure of contribution similarity that outperforms previous metrics and nears inter-annotator agreement. We find substantial headroom remains in each task; for example, in contribution generation, frontier LLMs achieve only moderate similarity to the ground truth (GPT-5 averages 5.6 on a 1-10 scale). When composed into a 12-month end-to-end simulation of scientific production, the resulting synthetic corpus is systematically less diverse and less novel than human-authored research from the same period.

## Dataset Statistics

| Split | Target Papers | Total Papers | Unique Authors | Date Range |
|-------|--------------|--------------|----------------|-------------|
| Train | 44,990 | 373,716 | 106,913 | Oct 2023 - Oct 2024 |
| Test | 52,836 | 464,942 | 129,020 | Oct 2024 - Oct 2025 |
| **Total** | **97,826** | **501,866** | **182,727** | Oct 2023 - Oct 2025 |

**arXiv Categories**: cs.CL, cs.LG, cs.AI, cs.ML, cs.CV, cs.IR, cs.NE

**Average Statistics** (computed over target papers):
- Authors per paper: 5.15
- Words in abstract: 187.1
- Key references: 3.08 (median: 3)
- Citations at 12 months: 5.57

## Dataset Structure

### Dataset Construction

PreScience is built from research papers posted to arXiv from October 2023 to October 2025 in seven AI-adjacent categories: **cs.CL, cs.LG, cs.AI, cs.ML, cs.CV, cs.IR, and cs.NE**. These constitute the **target papers** in our benchmark. Papers are represented by their titles and abstracts (full texts are not included).

We include a set of **companion papers** consisting of:
- Key references of target papers
- Prior publications of target authors
- Key references of those prior publications

Together, these form the historical corpus H<sub>&lt;t</sub> used to condition all tasks.

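The temporal cut behind H<sub>&lt;t</sub> can be illustrated with a minimal sketch. The toy records below mirror only the dataset's `date` field; real records carry the full metadata described under Paper Schema, and this helper is illustrative, not part of the PreScience codebase.

```python
from datetime import date

# Toy paper records carrying only the `date` field (YYYY-MM-DD).
papers = [
    {"corpus_id": "1", "date": "2024-03-15"},
    {"corpus_id": "2", "date": "2024-09-01"},
    {"corpus_id": "3", "date": "2025-01-20"},
]

def historical_corpus(papers, t):
    """Return all papers published strictly before date t (the corpus H_<t)."""
    cutoff = date.fromisoformat(t)
    return [p for p in papers if date.fromisoformat(p["date"]) < cutoff]

h = historical_corpus(papers, "2024-10-01")
print([p["corpus_id"] for p in h])  # ['1', '2']
```

Conditioning on a strict cut like this is what keeps every task input free of post-publication information.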
### Ensuring Dataset Quality

We apply several design choices to ensure that PreScience supports reliable modeling and evaluation rather than reflecting artifacts of noisy metadata or degenerate task instances:

- **Author disambiguation**: We disambiguate author profiles using the S2AND pipeline (Subramanian et al., 2021), yielding better author clusters than the current Semantic Scholar Academic Graph release
- **Key reference filtering**: We restrict target papers to those with between 1 and 10 key references, excluding instances with zero or unusually large key-reference sets
- **Temporal alignment**: All author- and reference-level metadata (publication counts, citation counts, h-indices) are temporally aligned to each paper's publication date to prevent leakage of future information into task inputs

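The key-reference filter above amounts to a simple count check. A minimal sketch with toy records (the `key_references` field follows the schema below; the helper itself is illustrative):

```python
# Keep only papers whose key-reference count falls in [1, 10].
def passes_key_reference_filter(paper, lo=1, hi=10):
    n = len(paper.get("key_references", []))
    return lo <= n <= hi

papers = [
    {"corpus_id": "a", "key_references": []},         # excluded: zero key refs
    {"corpus_id": "b", "key_references": [{}] * 3},   # kept
    {"corpus_id": "c", "key_references": [{}] * 25},  # excluded: unusually many
]
kept = [p["corpus_id"] for p in papers if passes_key_reference_filter(p)]
print(kept)  # ['b']
```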
### Files

This dataset contains:

1. **`train.parquet`**: Training period papers (373,716 papers from Oct 2023 - Oct 2024)
2. **`test.parquet`**: Test period papers (464,942 papers from Oct 2024 - Oct 2025)
3. **`author_disambiguation.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG author IDs
4. **`author_publications.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG corpus IDs of their publications

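The two `.jsonl` files hold one JSON object per line. A minimal reading sketch; the record keys used here (`author_id`, `s2ag_ids`) are illustrative stand-ins, not guaranteed field names, so inspect a line of the actual file first:

```python
import io
import json

# Stand-in for an open file handle over a JSONL mapping file.
# NOTE: the keys "author_id" and "s2ag_ids" are hypothetical examples.
sample = io.StringIO(
    '{"author_id": "A1", "s2ag_ids": ["10", "11"]}\n'
    '{"author_id": "A2", "s2ag_ids": ["12"]}\n'
)

mapping = {}
for line in sample:  # one JSON object per line
    rec = json.loads(line)
    mapping[rec["author_id"]] = rec["s2ag_ids"]

print(mapping["A1"])  # ['10', '11']
```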
### Paper Schema

#### Roles

Papers in the dataset are each assigned a subset of the following roles:
- `target`: Primary evaluation papers (Oct 2023-2024 for train, Oct 2024-2025 for test)
- `target.key_reference`: Highly influential papers cited by targets
- `target.author.publication_history`: Prior work by target paper authors
- `target.author.publication_history.key_reference`: Key refs of authors' prior work

Each paper record contains:

```python
{
    # Basic metadata (available for all papers)
    "corpus_id": str,         # S2AG corpus ID
    "arxiv_id": str,          # arXiv identifier
    "date": str,              # Publication date (YYYY-MM-DD)
    "categories": list[str],  # arXiv categories
    "title": str,             # Paper title
    "abstract": str,          # Paper abstract
    "roles": list[str],       # Paper roles in dataset

    # Citation data (available for target papers [guaranteed] and
    # target.author.publication_history papers [best-effort])
    "key_references": list[{  # Highly influential references
        "corpus_id": str,
        "num_citations": int  # Citations at target paper date
    }],

    # Author data (available for target papers [guaranteed] and
    # target.author.publication_history papers [best-effort])
    "authors": list[{         # Author roster
        "author_id": str,     # S2AND-disambiguated ID
        "name": str,
        "publication_history": list[str],  # Prior corpus_ids
        "h_index": int,       # At target paper date
        "num_papers": int,
        "num_citations": int
    }],

    # Impact data (target papers only)
    "citation_trajectory": list[int]  # Monthly cumulative citation counts
}
```

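Since `citation_trajectory` stores monthly *cumulative* counts, point-in-time and per-month figures fall out by indexing and differencing. A small sketch with a toy trajectory (assuming index 0 corresponds to the first month after publication):

```python
# Toy 12-month cumulative citation trajectory.
trajectory = [0, 0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 6]

# Citations at 12 months = cumulative count at index 11.
citations_at_12_months = trajectory[11] if len(trajectory) >= 12 else None

# New citations gained in month 6 = difference of adjacent cumulative counts.
new_in_month_6 = trajectory[5] - trajectory[4]

print(citations_at_12_months, new_in_month_6)  # 6 1
```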

## Usage

### Using with PreScience codebase

The [PreScience codebase](https://github.com/allenai/prescience) includes a helper function to load data from HuggingFace:

```python
import utils

# Load from HuggingFace
all_papers, author_disambiguation, embeddings = utils.load_corpus(
    hf_repo_id="allenai/prescience",
    split="test",
    embeddings_dir="./embeddings",  # Optional: for embedding-based baselines
    embedding_type="grit"           # Optional: gtr, specter2, or grit
)
```

### Ad hoc Loading

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("allenai/prescience")

# Access a paper
paper = dataset["test"][0]
print(f"Title: {paper['title']}")
print(f"Authors: {len(paper['authors'])}")
print(f"Roles: {paper['roles']}")
```
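Because each split mixes target and companion papers, the `roles` field is the natural way to select evaluation instances. A minimal sketch with toy rows standing in for dataset records:

```python
# Toy rows mirroring the `roles` field of dataset records.
rows = [
    {"corpus_id": "1", "roles": ["target"]},
    {"corpus_id": "2", "roles": ["target.key_reference"]},
    {"corpus_id": "3", "roles": ["target", "target.key_reference"]},
]

# Select papers that serve as evaluation targets (a paper may hold
# several roles at once, so membership is the right test).
targets = [r for r in rows if "target" in r["roles"]]
print(len(targets))  # 2
```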

## Computing Embeddings

Embeddings are not included in this dataset, but can be computed using the `dataset/embeddings/compute_paper_embeddings.py` script provided with the [PreScience codebase](https://github.com/allenai/prescience).

## Citation

```bibtex
@article{prescience2025,
  title={PreScience: A Benchmark for Forecasting Scientific Contributions},
  author={Anirudh Ajith and Amanpreet Singh and Jay DeYoung and Nadav Kunievsky and Austin C. Kozlowski and Oyvind Tafjord and James Evans and Daniel S Weld and Tom Hope and Doug Downey},
  journal={[TBD]},
  year={2026}
}
```

## License

This dataset is released under the ODC-BY license.

## Links

- **Repository**: https://github.com/allenai/prescience
- **Paper**: [TBD]